00:00:00.001 Started by upstream project "autotest-per-patch" build number 127082 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.214 > git --version # 'git version 2.39.2' 00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.229 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.229 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.213 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.224 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.235 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.235 > git config core.sparsecheckout # timeout=10 00:00:06.248 > git read-tree -mu HEAD # timeout=10 00:00:06.265 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.295 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.296 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.397 [Pipeline] Start of Pipeline 00:00:06.408 [Pipeline] library 00:00:06.409 Loading library shm_lib@master 00:00:06.409 Library shm_lib@master is cached. Copying from home. 00:00:06.423 [Pipeline] node 00:00:06.433 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.434 [Pipeline] { 00:00:06.443 [Pipeline] catchError 00:00:06.445 [Pipeline] { 00:00:06.456 [Pipeline] wrap 00:00:06.464 [Pipeline] { 00:00:06.470 [Pipeline] stage 00:00:06.471 [Pipeline] { (Prologue) 00:00:06.632 [Pipeline] sh 00:00:06.919 + logger -p user.info -t JENKINS-CI 00:00:06.939 [Pipeline] echo 00:00:06.940 Node: WFP16 00:00:06.948 [Pipeline] sh 00:00:07.243 [Pipeline] setCustomBuildProperty 00:00:07.256 [Pipeline] echo 00:00:07.258 Cleanup processes 00:00:07.263 [Pipeline] sh 00:00:07.545 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.545 2181943 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.555 [Pipeline] sh 00:00:07.835 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.835 ++ grep -v 'sudo pgrep' 00:00:07.835 ++ awk '{print $1}' 00:00:07.835 + sudo kill -9 00:00:07.835 + true 00:00:07.850 [Pipeline] cleanWs 00:00:07.861 [WS-CLEANUP] Deleting project workspace... 00:00:07.861 [WS-CLEANUP] Deferred wipeout is used... 
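The block above is the pre-run process sweep: list anything still running out of the workspace, filter the pgrep invocation itself out of its own output, and kill whatever is left. Here the PID list comes back empty (only the pgrep matched), so kill -9 fails and the trailing true keeps the errexit-enabled script from aborting. A minimal sketch of the same idiom, with the workspace path as the only assumption:

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # adjust per node
  # Collect PIDs of stale processes, excluding the pgrep we just ran.
  PIDS=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill exits non-zero on an empty list; '|| true' swallows that under set -e.
  sudo kill -9 $PIDS || true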
00:00:07.868 [WS-CLEANUP] done 00:00:07.872 [Pipeline] setCustomBuildProperty 00:00:07.889 [Pipeline] sh 00:00:08.169 + sudo git config --global --replace-all safe.directory '*' 00:00:08.256 [Pipeline] httpRequest 00:00:08.283 [Pipeline] echo 00:00:08.284 Sorcerer 10.211.164.101 is alive 00:00:08.291 [Pipeline] httpRequest 00:00:08.295 HttpMethod: GET 00:00:08.296 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.296 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.315 Response Code: HTTP/1.1 200 OK 00:00:08.316 Success: Status code 200 is in the accepted range: 200,404 00:00:08.316 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:17.236 [Pipeline] sh 00:00:17.519 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:17.536 [Pipeline] httpRequest 00:00:17.553 [Pipeline] echo 00:00:17.554 Sorcerer 10.211.164.101 is alive 00:00:17.561 [Pipeline] httpRequest 00:00:17.565 HttpMethod: GET 00:00:17.566 URL: http://10.211.164.101/packages/spdk_0bb5c21e286c2a526066ac6459b84bb9e7b10cac.tar.gz 00:00:17.566 Sending request to url: http://10.211.164.101/packages/spdk_0bb5c21e286c2a526066ac6459b84bb9e7b10cac.tar.gz 00:00:17.568 Response Code: HTTP/1.1 200 OK 00:00:17.569 Success: Status code 200 is in the accepted range: 200,404 00:00:17.569 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0bb5c21e286c2a526066ac6459b84bb9e7b10cac.tar.gz 00:00:34.843 [Pipeline] sh 00:00:35.127 + tar --no-same-owner -xf spdk_0bb5c21e286c2a526066ac6459b84bb9e7b10cac.tar.gz 00:00:37.675 [Pipeline] sh 00:00:37.959 + git -C spdk log --oneline -n5 00:00:37.959 0bb5c21e2 nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:00:37.959 8968f30fe nvmf/tcp: replace pending_buf_queue with nvmf_tcp_request_get_buffers 00:00:37.959 13040d616 nvmf: enable iobuf based queuing for nvmf requests 00:00:37.959 5c0b15eed nvmf/tcp: fix spdk_nvmf_tcp_control_msg_list queuing 00:00:37.959 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:37.972 [Pipeline] } 00:00:37.989 [Pipeline] // stage 00:00:37.999 [Pipeline] stage 00:00:38.001 [Pipeline] { (Prepare) 00:00:38.020 [Pipeline] writeFile 00:00:38.038 [Pipeline] sh 00:00:38.322 + logger -p user.info -t JENKINS-CI 00:00:38.335 [Pipeline] sh 00:00:38.625 + logger -p user.info -t JENKINS-CI 00:00:38.637 [Pipeline] sh 00:00:38.919 + cat autorun-spdk.conf 00:00:38.919 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.919 SPDK_TEST_NVMF=1 00:00:38.919 SPDK_TEST_NVME_CLI=1 00:00:38.919 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.919 SPDK_TEST_NVMF_NICS=e810 00:00:38.919 SPDK_TEST_VFIOUSER=1 00:00:38.919 SPDK_RUN_UBSAN=1 00:00:38.919 NET_TYPE=phy 00:00:38.926 RUN_NIGHTLY=0 00:00:38.931 [Pipeline] readFile 00:00:38.956 [Pipeline] withEnv 00:00:38.958 [Pipeline] { 00:00:38.971 [Pipeline] sh 00:00:39.256 + set -ex 00:00:39.256 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:39.256 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:39.256 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.256 ++ SPDK_TEST_NVMF=1 00:00:39.256 ++ SPDK_TEST_NVME_CLI=1 00:00:39.256 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.256 ++ SPDK_TEST_NVMF_NICS=e810 00:00:39.256 ++ SPDK_TEST_VFIOUSER=1 00:00:39.256 ++ SPDK_RUN_UBSAN=1 00:00:39.256 ++ NET_TYPE=phy 00:00:39.256 ++ RUN_NIGHTLY=0 00:00:39.256 + case $SPDK_TEST_NVMF_NICS in 
00:00:39.256 + DRIVERS=ice 00:00:39.256 + [[ tcp == \r\d\m\a ]] 00:00:39.256 + [[ -n ice ]] 00:00:39.256 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:39.256 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:39.256 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:39.256 rmmod: ERROR: Module irdma is not currently loaded 00:00:39.256 rmmod: ERROR: Module i40iw is not currently loaded 00:00:39.256 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:39.256 + true 00:00:39.256 + for D in $DRIVERS 00:00:39.256 + sudo modprobe ice 00:00:39.256 + exit 0 00:00:39.266 [Pipeline] } 00:00:39.283 [Pipeline] // withEnv 00:00:39.289 [Pipeline] } 00:00:39.305 [Pipeline] // stage 00:00:39.315 [Pipeline] catchError 00:00:39.317 [Pipeline] { 00:00:39.333 [Pipeline] timeout 00:00:39.334 Timeout set to expire in 50 min 00:00:39.336 [Pipeline] { 00:00:39.351 [Pipeline] stage 00:00:39.353 [Pipeline] { (Tests) 00:00:39.370 [Pipeline] sh 00:00:39.655 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.655 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.655 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.655 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:39.655 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.655 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.655 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:39.655 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.655 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.655 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.655 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:39.655 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.655 + source /etc/os-release 00:00:39.655 ++ NAME='Fedora Linux' 00:00:39.655 ++ VERSION='38 (Cloud Edition)' 00:00:39.655 ++ ID=fedora 00:00:39.655 ++ VERSION_ID=38 00:00:39.655 ++ VERSION_CODENAME= 00:00:39.655 ++ PLATFORM_ID=platform:f38 00:00:39.655 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:39.655 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.655 ++ LOGO=fedora-logo-icon 00:00:39.655 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:39.655 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.655 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:39.655 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.655 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.655 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.655 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:39.655 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.655 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:39.655 ++ SUPPORT_END=2024-05-14 00:00:39.655 ++ VARIANT='Cloud Edition' 00:00:39.655 ++ VARIANT_ID=cloud 00:00:39.655 + uname -a 00:00:39.655 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:39.655 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:42.194 Hugepages 00:00:42.194 node hugesize free / total 00:00:42.194 node0 1048576kB 0 / 0 00:00:42.194 node0 2048kB 0 / 0 00:00:42.194 node1 1048576kB 0 / 0 00:00:42.194 node1 2048kB 0 / 0 00:00:42.194 00:00:42.194 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:42.194 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 
0000:00:04.2 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:42.194 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:42.194 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:42.194 + rm -f /tmp/spdk-ld-path 00:00:42.194 + source autorun-spdk.conf 00:00:42.194 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.194 ++ SPDK_TEST_NVMF=1 00:00:42.194 ++ SPDK_TEST_NVME_CLI=1 00:00:42.194 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.194 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.194 ++ SPDK_TEST_VFIOUSER=1 00:00:42.194 ++ SPDK_RUN_UBSAN=1 00:00:42.194 ++ NET_TYPE=phy 00:00:42.194 ++ RUN_NIGHTLY=0 00:00:42.194 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.194 + [[ -n '' ]] 00:00:42.194 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.194 + for M in /var/spdk/build-*-manifest.txt 00:00:42.194 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:42.194 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.194 + for M in /var/spdk/build-*-manifest.txt 00:00:42.194 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:42.194 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.194 ++ uname 00:00:42.194 + [[ Linux == \L\i\n\u\x ]] 00:00:42.194 + sudo dmesg -T 00:00:42.453 + sudo dmesg --clear 00:00:42.453 + dmesg_pid=2182864 00:00:42.453 + [[ Fedora Linux == FreeBSD ]] 00:00:42.453 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.453 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.453 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.453 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.453 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.453 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.453 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.453 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.453 + sudo dmesg -Tw 00:00:42.453 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.453 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:42.453 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.453 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.454 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.454 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.454 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.454 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.454 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.454 Test configuration: 00:00:42.454 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.454 SPDK_TEST_NVMF=1 00:00:42.454 SPDK_TEST_NVME_CLI=1 00:00:42.454 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.454 SPDK_TEST_NVMF_NICS=e810 00:00:42.454 SPDK_TEST_VFIOUSER=1 00:00:42.454 SPDK_RUN_UBSAN=1 00:00:42.454 NET_TYPE=phy 00:00:42.454 RUN_NIGHTLY=0 18:37:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:42.454 18:37:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.454 18:37:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.454 18:37:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.454 18:37:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.454 18:37:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.454 18:37:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.454 18:37:27 -- paths/export.sh@5 -- $ export PATH 00:00:42.454 18:37:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.454 18:37:27 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:42.454 18:37:27 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:42.454 18:37:27 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721839047.XXXXXX 00:00:42.454 18:37:27 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721839047.Ptbzbt 00:00:42.454 18:37:27 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:42.454 18:37:27 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:42.454 18:37:27 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:42.454 18:37:27 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.454 18:37:27 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.454 18:37:27 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:42.454 18:37:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:42.454 18:37:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.454 18:37:27 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:42.454 18:37:27 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:42.454 18:37:27 -- pm/common@17 -- $ local monitor 00:00:42.454 18:37:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.454 18:37:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.454 18:37:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.454 18:37:27 -- pm/common@21 -- $ date +%s 00:00:42.454 18:37:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.454 18:37:27 -- pm/common@21 -- $ date +%s 00:00:42.454 18:37:27 -- pm/common@25 -- $ sleep 1 00:00:42.454 18:37:27 -- pm/common@21 -- $ date +%s 00:00:42.454 18:37:27 -- pm/common@21 -- $ date +%s 00:00:42.454 18:37:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839047 00:00:42.454 18:37:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839047 00:00:42.454 18:37:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839047 00:00:42.454 18:37:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839047 00:00:42.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839047_collect-vmstat.pm.log 00:00:42.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839047_collect-cpu-load.pm.log 00:00:42.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839047_collect-cpu-temp.pm.log 00:00:42.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839047_collect-bmc-pm.bmc.pm.log 00:00:43.391 18:37:28 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:43.391 18:37:28 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.391 18:37:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.391 18:37:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.391 18:37:28 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.391 Wed Jul 24 04:37:28 PM UTC 2024 00:00:43.391 18:37:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.649 v24.09-pre-313-g0bb5c21e2 00:00:43.649 18:37:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.649 18:37:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.649 18:37:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:43.649 18:37:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:43.649 18:37:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:43.649 18:37:28 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.649 ************************************ 00:00:43.649 START TEST ubsan 00:00:43.649 ************************************ 00:00:43.649 18:37:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:43.649 using ubsan 00:00:43.649 00:00:43.649 real 0m0.000s 00:00:43.649 user 0m0.000s 00:00:43.649 sys 0m0.000s 00:00:43.649 18:37:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:43.649 18:37:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.649 ************************************ 00:00:43.649 END TEST ubsan 00:00:43.649 ************************************ 00:00:43.649 18:37:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.649 18:37:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.649 18:37:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.649 18:37:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.650 18:37:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.650 18:37:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.650 18:37:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.650 18:37:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.650 18:37:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:43.650 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:43.650 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:44.217 Using 'verbs' RDMA provider 00:00:57.366 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:12.328 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:12.328 Creating mk/config.mk...done. 00:01:12.328 Creating mk/cc.flags.mk...done. 00:01:12.328 Type 'make' to build. 00:01:12.328 18:37:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:12.328 18:37:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:12.328 18:37:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:12.328 18:37:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.328 ************************************ 00:01:12.328 START TEST make 00:01:12.328 ************************************ 00:01:12.328 18:37:55 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:12.328 make[1]: Nothing to be done for 'all'. 
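configure has now been run and make dispatched with -j112 (this node exposes 112 hardware threads); the top-level make then drives the bundled libvfio-user and DPDK sub-builds that follow. A condensed, hand-typed equivalent of the build step, with the flag set copied from the configure line above; treat it as a sketch rather than the exact autobuild command line:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
              --with-rdma --with-idxd --with-fio=/usr/src/fio \
              --with-iscsi-initiator --disable-unit-tests \
              --with-ublk --with-vfio-user --with-shared
  make -j112    # or portably: make -j"$(nproc)"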
00:01:12.899 The Meson build system 00:01:12.899 Version: 1.3.1 00:01:12.899 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:12.899 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:12.899 Build type: native build 00:01:12.899 Project name: libvfio-user 00:01:12.899 Project version: 0.0.1 00:01:12.899 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.899 C linker for the host machine: cc ld.bfd 2.39-16 00:01:12.899 Host machine cpu family: x86_64 00:01:12.899 Host machine cpu: x86_64 00:01:12.899 Run-time dependency threads found: YES 00:01:12.899 Library dl found: YES 00:01:12.899 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.899 Run-time dependency json-c found: YES 0.17 00:01:12.899 Run-time dependency cmocka found: YES 1.1.7 00:01:12.899 Program pytest-3 found: NO 00:01:12.899 Program flake8 found: NO 00:01:12.899 Program misspell-fixer found: NO 00:01:12.899 Program restructuredtext-lint found: NO 00:01:12.899 Program valgrind found: YES (/usr/bin/valgrind) 00:01:12.899 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:12.899 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.899 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.899 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:12.899 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:12.899 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:12.899 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:12.899 Build targets in project: 8 00:01:12.899 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:12.899 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:12.899 00:01:12.899 libvfio-user 0.0.1 00:01:12.899 00:01:12.899 User defined options 00:01:12.899 buildtype : debug 00:01:12.899 default_library: shared 00:01:12.899 libdir : /usr/local/lib 00:01:12.899 00:01:12.899 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.466 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:13.466 [1/37] Compiling C object samples/null.p/null.c.o 00:01:13.466 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:13.466 [3/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:13.466 [4/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:13.466 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:13.466 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:13.466 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:13.466 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:13.466 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:13.466 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:13.725 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:13.725 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:13.725 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:13.725 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:13.725 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:13.725 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:13.725 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:13.725 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:13.725 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:13.725 [20/37] Compiling C object samples/server.p/server.c.o 00:01:13.725 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:13.725 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:13.725 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:13.725 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:13.725 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:13.725 [26/37] Compiling C object samples/client.p/client.c.o 00:01:13.725 [27/37] Linking target samples/client 00:01:13.725 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:13.725 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:13.725 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:13.984 [31/37] Linking target test/unit_tests 00:01:13.984 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:13.984 [33/37] Linking target samples/server 00:01:13.984 [34/37] Linking target samples/null 00:01:13.984 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:13.984 [36/37] Linking target samples/lspci 00:01:13.984 [37/37] Linking target samples/gpio-pci-idio-16 00:01:13.984 INFO: autodetecting backend as ninja 00:01:13.984 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
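The libvfio-user sub-build above is a stock Meson flow: a debug, shared-library configure, a ninja build of the 37 targets, then a staged install (the DESTDIR line just below) so the artifacts land under spdk/build rather than the real root. A condensed sketch of that sequence; the directories mirror the log, but the exact options SPDK passes may differ:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype=debug --libdir=/usr/local/lib \
        -Ddefault_library=shared
  ninja -C "$BUILD"
  # Stage into a private root instead of installing system-wide.
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BUILD"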
00:01:13.984 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.551 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:14.551 ninja: no work to do. 00:01:19.821 The Meson build system 00:01:19.821 Version: 1.3.1 00:01:19.821 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:19.821 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:19.821 Build type: native build 00:01:19.821 Program cat found: YES (/usr/bin/cat) 00:01:19.821 Project name: DPDK 00:01:19.821 Project version: 24.03.0 00:01:19.821 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.821 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.821 Host machine cpu family: x86_64 00:01:19.821 Host machine cpu: x86_64 00:01:19.821 Message: ## Building in Developer Mode ## 00:01:19.821 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:19.821 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:19.821 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:19.821 Program python3 found: YES (/usr/bin/python3) 00:01:19.821 Program cat found: YES (/usr/bin/cat) 00:01:19.821 Compiler for C supports arguments -march=native: YES 00:01:19.821 Checking for size of "void *" : 8 00:01:19.821 Checking for size of "void *" : 8 (cached) 00:01:19.821 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:19.821 Library m found: YES 00:01:19.821 Library numa found: YES 00:01:19.821 Has header "numaif.h" : YES 00:01:19.821 Library fdt found: NO 00:01:19.821 Library execinfo found: NO 00:01:19.821 Has header "execinfo.h" : YES 00:01:19.821 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.821 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:19.821 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:19.822 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:19.822 Run-time dependency openssl found: YES 3.0.9 00:01:19.822 Run-time dependency libpcap found: YES 1.10.4 00:01:19.822 Has header "pcap.h" with dependency libpcap: YES 00:01:19.822 Compiler for C supports arguments -Wcast-qual: YES 00:01:19.822 Compiler for C supports arguments -Wdeprecated: YES 00:01:19.822 Compiler for C supports arguments -Wformat: YES 00:01:19.822 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:19.822 Compiler for C supports arguments -Wformat-security: NO 00:01:19.822 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.822 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:19.822 Compiler for C supports arguments -Wnested-externs: YES 00:01:19.822 Compiler for C supports arguments -Wold-style-definition: YES 00:01:19.822 Compiler for C supports arguments -Wpointer-arith: YES 00:01:19.822 Compiler for C supports arguments -Wsign-compare: YES 00:01:19.822 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:19.822 Compiler for C supports arguments -Wundef: YES 00:01:19.822 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.822 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:19.822 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:19.822 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.822 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:19.822 Program objdump found: YES (/usr/bin/objdump) 00:01:19.822 Compiler for C supports arguments -mavx512f: YES 00:01:19.822 Checking if "AVX512 checking" compiles: YES 00:01:19.822 Fetching value of define "__SSE4_2__" : 1 00:01:19.822 Fetching value of define "__AES__" : 1 00:01:19.822 Fetching value of define "__AVX__" : 1 00:01:19.822 Fetching value of define "__AVX2__" : 1 00:01:19.822 Fetching value of define "__AVX512BW__" : 1 00:01:19.822 Fetching value of define "__AVX512CD__" : 1 00:01:19.822 Fetching value of define "__AVX512DQ__" : 1 00:01:19.822 Fetching value of define "__AVX512F__" : 1 00:01:19.822 Fetching value of define "__AVX512VL__" : 1 00:01:19.822 Fetching value of define "__PCLMUL__" : 1 00:01:19.822 Fetching value of define "__RDRND__" : 1 00:01:19.822 Fetching value of define "__RDSEED__" : 1 00:01:19.822 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:19.822 Fetching value of define "__znver1__" : (undefined) 00:01:19.822 Fetching value of define "__znver2__" : (undefined) 00:01:19.822 Fetching value of define "__znver3__" : (undefined) 00:01:19.822 Fetching value of define "__znver4__" : (undefined) 00:01:19.822 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:19.822 Message: lib/log: Defining dependency "log" 00:01:19.822 Message: lib/kvargs: Defining dependency "kvargs" 00:01:19.822 Message: lib/telemetry: Defining dependency "telemetry" 00:01:19.822 Checking for function "getentropy" : NO 00:01:19.822 Message: lib/eal: Defining dependency "eal" 00:01:19.822 Message: lib/ring: Defining dependency "ring" 00:01:19.822 Message: lib/rcu: Defining dependency "rcu" 00:01:19.822 Message: lib/mempool: Defining dependency "mempool" 00:01:19.822 Message: lib/mbuf: Defining dependency "mbuf" 00:01:19.822 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:19.822 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:19.822 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:19.822 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:19.822 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:19.822 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:19.822 Compiler for C supports arguments -mpclmul: YES 00:01:19.822 Compiler for C supports arguments -maes: YES 00:01:19.822 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.822 Compiler for C supports arguments -mavx512bw: YES 00:01:19.822 Compiler for C supports arguments -mavx512dq: YES 00:01:19.822 Compiler for C supports arguments -mavx512vl: YES 00:01:19.822 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:19.822 Compiler for C supports arguments -mavx2: YES 00:01:19.822 Compiler for C supports arguments -mavx: YES 00:01:19.822 Message: lib/net: Defining dependency "net" 00:01:19.822 Message: lib/meter: Defining dependency "meter" 00:01:19.822 Message: lib/ethdev: Defining dependency "ethdev" 00:01:19.822 Message: lib/pci: Defining dependency "pci" 00:01:19.822 Message: lib/cmdline: Defining dependency "cmdline" 00:01:19.822 Message: lib/hash: Defining dependency "hash" 00:01:19.822 Message: lib/timer: Defining dependency "timer" 00:01:19.822 Message: lib/compressdev: Defining dependency "compressdev" 00:01:19.822 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:19.822 Message: lib/dmadev: Defining dependency "dmadev" 00:01:19.822 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:19.822 Message: lib/power: Defining dependency "power" 00:01:19.822 Message: lib/reorder: Defining dependency "reorder" 00:01:19.822 Message: lib/security: Defining dependency "security" 00:01:19.822 Has header "linux/userfaultfd.h" : YES 00:01:19.822 Has header "linux/vduse.h" : YES 00:01:19.822 Message: lib/vhost: Defining dependency "vhost" 00:01:19.822 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:19.822 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:19.822 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:19.822 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:19.822 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:19.822 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:19.822 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:19.822 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:19.822 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:19.822 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:19.822 Program doxygen found: YES (/usr/bin/doxygen) 00:01:19.822 Configuring doxy-api-html.conf using configuration 00:01:19.822 Configuring doxy-api-man.conf using configuration 00:01:19.822 Program mandb found: YES (/usr/bin/mandb) 00:01:19.822 Program sphinx-build found: NO 00:01:19.822 Configuring rte_build_config.h using configuration 00:01:19.822 Message: 00:01:19.822 ================= 00:01:19.822 Applications Enabled 00:01:19.822 ================= 00:01:19.822 00:01:19.822 apps: 00:01:19.822 00:01:19.822 00:01:19.822 Message: 00:01:19.822 ================= 00:01:19.822 Libraries Enabled 00:01:19.822 ================= 00:01:19.822 00:01:19.822 libs: 00:01:19.822 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:19.822 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:19.822 cryptodev, dmadev, power, reorder, security, vhost, 00:01:19.822 00:01:19.822 Message: 00:01:19.822 =============== 00:01:19.822 Drivers Enabled 00:01:19.822 =============== 00:01:19.822 00:01:19.822 common: 00:01:19.822 00:01:19.822 bus: 00:01:19.822 pci, vdev, 00:01:19.822 mempool: 00:01:19.822 ring, 00:01:19.822 dma: 00:01:19.822 00:01:19.822 net: 00:01:19.822 00:01:19.822 crypto: 00:01:19.822 00:01:19.822 compress: 00:01:19.822 00:01:19.822 vdpa: 00:01:19.822 00:01:19.822 00:01:19.822 Message: 00:01:19.822 ================= 00:01:19.822 Content Skipped 00:01:19.822 ================= 00:01:19.822 00:01:19.822 apps: 00:01:19.822 dumpcap: explicitly disabled via build config 00:01:19.822 graph: explicitly disabled via build config 00:01:19.822 pdump: explicitly disabled via build config 00:01:19.822 proc-info: explicitly disabled via build config 00:01:19.822 test-acl: explicitly disabled via build config 00:01:19.822 test-bbdev: explicitly disabled via build config 00:01:19.822 test-cmdline: explicitly disabled via build config 00:01:19.822 test-compress-perf: explicitly disabled via build config 00:01:19.822 test-crypto-perf: explicitly disabled via build config 00:01:19.822 test-dma-perf: explicitly disabled via build config 00:01:19.822 test-eventdev: explicitly disabled via build config 00:01:19.822 test-fib: explicitly disabled via build config 00:01:19.822 test-flow-perf: explicitly disabled via build config 00:01:19.822 test-gpudev: explicitly disabled via build config 
00:01:19.822 test-mldev: explicitly disabled via build config 00:01:19.822 test-pipeline: explicitly disabled via build config 00:01:19.822 test-pmd: explicitly disabled via build config 00:01:19.822 test-regex: explicitly disabled via build config 00:01:19.822 test-sad: explicitly disabled via build config 00:01:19.822 test-security-perf: explicitly disabled via build config 00:01:19.822 00:01:19.822 libs: 00:01:19.822 argparse: explicitly disabled via build config 00:01:19.822 metrics: explicitly disabled via build config 00:01:19.822 acl: explicitly disabled via build config 00:01:19.822 bbdev: explicitly disabled via build config 00:01:19.822 bitratestats: explicitly disabled via build config 00:01:19.822 bpf: explicitly disabled via build config 00:01:19.822 cfgfile: explicitly disabled via build config 00:01:19.822 distributor: explicitly disabled via build config 00:01:19.822 efd: explicitly disabled via build config 00:01:19.822 eventdev: explicitly disabled via build config 00:01:19.822 dispatcher: explicitly disabled via build config 00:01:19.822 gpudev: explicitly disabled via build config 00:01:19.822 gro: explicitly disabled via build config 00:01:19.822 gso: explicitly disabled via build config 00:01:19.822 ip_frag: explicitly disabled via build config 00:01:19.822 jobstats: explicitly disabled via build config 00:01:19.822 latencystats: explicitly disabled via build config 00:01:19.822 lpm: explicitly disabled via build config 00:01:19.822 member: explicitly disabled via build config 00:01:19.823 pcapng: explicitly disabled via build config 00:01:19.823 rawdev: explicitly disabled via build config 00:01:19.823 regexdev: explicitly disabled via build config 00:01:19.823 mldev: explicitly disabled via build config 00:01:19.823 rib: explicitly disabled via build config 00:01:19.823 sched: explicitly disabled via build config 00:01:19.823 stack: explicitly disabled via build config 00:01:19.823 ipsec: explicitly disabled via build config 00:01:19.823 pdcp: explicitly disabled via build config 00:01:19.823 fib: explicitly disabled via build config 00:01:19.823 port: explicitly disabled via build config 00:01:19.823 pdump: explicitly disabled via build config 00:01:19.823 table: explicitly disabled via build config 00:01:19.823 pipeline: explicitly disabled via build config 00:01:19.823 graph: explicitly disabled via build config 00:01:19.823 node: explicitly disabled via build config 00:01:19.823 00:01:19.823 drivers: 00:01:19.823 common/cpt: not in enabled drivers build config 00:01:19.823 common/dpaax: not in enabled drivers build config 00:01:19.823 common/iavf: not in enabled drivers build config 00:01:19.823 common/idpf: not in enabled drivers build config 00:01:19.823 common/ionic: not in enabled drivers build config 00:01:19.823 common/mvep: not in enabled drivers build config 00:01:19.823 common/octeontx: not in enabled drivers build config 00:01:19.823 bus/auxiliary: not in enabled drivers build config 00:01:19.823 bus/cdx: not in enabled drivers build config 00:01:19.823 bus/dpaa: not in enabled drivers build config 00:01:19.823 bus/fslmc: not in enabled drivers build config 00:01:19.823 bus/ifpga: not in enabled drivers build config 00:01:19.823 bus/platform: not in enabled drivers build config 00:01:19.823 bus/uacce: not in enabled drivers build config 00:01:19.823 bus/vmbus: not in enabled drivers build config 00:01:19.823 common/cnxk: not in enabled drivers build config 00:01:19.823 common/mlx5: not in enabled drivers build config 00:01:19.823 common/nfp: not in 
enabled drivers build config 00:01:19.823 common/nitrox: not in enabled drivers build config 00:01:19.823 common/qat: not in enabled drivers build config 00:01:19.823 common/sfc_efx: not in enabled drivers build config 00:01:19.823 mempool/bucket: not in enabled drivers build config 00:01:19.823 mempool/cnxk: not in enabled drivers build config 00:01:19.823 mempool/dpaa: not in enabled drivers build config 00:01:19.823 mempool/dpaa2: not in enabled drivers build config 00:01:19.823 mempool/octeontx: not in enabled drivers build config 00:01:19.823 mempool/stack: not in enabled drivers build config 00:01:19.823 dma/cnxk: not in enabled drivers build config 00:01:19.823 dma/dpaa: not in enabled drivers build config 00:01:19.823 dma/dpaa2: not in enabled drivers build config 00:01:19.823 dma/hisilicon: not in enabled drivers build config 00:01:19.823 dma/idxd: not in enabled drivers build config 00:01:19.823 dma/ioat: not in enabled drivers build config 00:01:19.823 dma/skeleton: not in enabled drivers build config 00:01:19.823 net/af_packet: not in enabled drivers build config 00:01:19.823 net/af_xdp: not in enabled drivers build config 00:01:19.823 net/ark: not in enabled drivers build config 00:01:19.823 net/atlantic: not in enabled drivers build config 00:01:19.823 net/avp: not in enabled drivers build config 00:01:19.823 net/axgbe: not in enabled drivers build config 00:01:19.823 net/bnx2x: not in enabled drivers build config 00:01:19.823 net/bnxt: not in enabled drivers build config 00:01:19.823 net/bonding: not in enabled drivers build config 00:01:19.823 net/cnxk: not in enabled drivers build config 00:01:19.823 net/cpfl: not in enabled drivers build config 00:01:19.823 net/cxgbe: not in enabled drivers build config 00:01:19.823 net/dpaa: not in enabled drivers build config 00:01:19.823 net/dpaa2: not in enabled drivers build config 00:01:19.823 net/e1000: not in enabled drivers build config 00:01:19.823 net/ena: not in enabled drivers build config 00:01:19.823 net/enetc: not in enabled drivers build config 00:01:19.823 net/enetfec: not in enabled drivers build config 00:01:19.823 net/enic: not in enabled drivers build config 00:01:19.823 net/failsafe: not in enabled drivers build config 00:01:19.823 net/fm10k: not in enabled drivers build config 00:01:19.823 net/gve: not in enabled drivers build config 00:01:19.823 net/hinic: not in enabled drivers build config 00:01:19.823 net/hns3: not in enabled drivers build config 00:01:19.823 net/i40e: not in enabled drivers build config 00:01:19.823 net/iavf: not in enabled drivers build config 00:01:19.823 net/ice: not in enabled drivers build config 00:01:19.823 net/idpf: not in enabled drivers build config 00:01:19.823 net/igc: not in enabled drivers build config 00:01:19.823 net/ionic: not in enabled drivers build config 00:01:19.823 net/ipn3ke: not in enabled drivers build config 00:01:19.823 net/ixgbe: not in enabled drivers build config 00:01:19.823 net/mana: not in enabled drivers build config 00:01:19.823 net/memif: not in enabled drivers build config 00:01:19.823 net/mlx4: not in enabled drivers build config 00:01:19.823 net/mlx5: not in enabled drivers build config 00:01:19.823 net/mvneta: not in enabled drivers build config 00:01:19.823 net/mvpp2: not in enabled drivers build config 00:01:19.823 net/netvsc: not in enabled drivers build config 00:01:19.823 net/nfb: not in enabled drivers build config 00:01:19.823 net/nfp: not in enabled drivers build config 00:01:19.823 net/ngbe: not in enabled drivers build config 00:01:19.823 
net/null: not in enabled drivers build config 00:01:19.823 net/octeontx: not in enabled drivers build config 00:01:19.823 net/octeon_ep: not in enabled drivers build config 00:01:19.823 net/pcap: not in enabled drivers build config 00:01:19.823 net/pfe: not in enabled drivers build config 00:01:19.823 net/qede: not in enabled drivers build config 00:01:19.823 net/ring: not in enabled drivers build config 00:01:19.823 net/sfc: not in enabled drivers build config 00:01:19.823 net/softnic: not in enabled drivers build config 00:01:19.823 net/tap: not in enabled drivers build config 00:01:19.823 net/thunderx: not in enabled drivers build config 00:01:19.823 net/txgbe: not in enabled drivers build config 00:01:19.823 net/vdev_netvsc: not in enabled drivers build config 00:01:19.823 net/vhost: not in enabled drivers build config 00:01:19.823 net/virtio: not in enabled drivers build config 00:01:19.823 net/vmxnet3: not in enabled drivers build config 00:01:19.823 raw/*: missing internal dependency, "rawdev" 00:01:19.823 crypto/armv8: not in enabled drivers build config 00:01:19.823 crypto/bcmfs: not in enabled drivers build config 00:01:19.823 crypto/caam_jr: not in enabled drivers build config 00:01:19.823 crypto/ccp: not in enabled drivers build config 00:01:19.823 crypto/cnxk: not in enabled drivers build config 00:01:19.823 crypto/dpaa_sec: not in enabled drivers build config 00:01:19.823 crypto/dpaa2_sec: not in enabled drivers build config 00:01:19.823 crypto/ipsec_mb: not in enabled drivers build config 00:01:19.823 crypto/mlx5: not in enabled drivers build config 00:01:19.823 crypto/mvsam: not in enabled drivers build config 00:01:19.823 crypto/nitrox: not in enabled drivers build config 00:01:19.823 crypto/null: not in enabled drivers build config 00:01:19.823 crypto/octeontx: not in enabled drivers build config 00:01:19.823 crypto/openssl: not in enabled drivers build config 00:01:19.823 crypto/scheduler: not in enabled drivers build config 00:01:19.823 crypto/uadk: not in enabled drivers build config 00:01:19.823 crypto/virtio: not in enabled drivers build config 00:01:19.823 compress/isal: not in enabled drivers build config 00:01:19.823 compress/mlx5: not in enabled drivers build config 00:01:19.823 compress/nitrox: not in enabled drivers build config 00:01:19.823 compress/octeontx: not in enabled drivers build config 00:01:19.823 compress/zlib: not in enabled drivers build config 00:01:19.823 regex/*: missing internal dependency, "regexdev" 00:01:19.823 ml/*: missing internal dependency, "mldev" 00:01:19.823 vdpa/ifc: not in enabled drivers build config 00:01:19.823 vdpa/mlx5: not in enabled drivers build config 00:01:19.823 vdpa/nfp: not in enabled drivers build config 00:01:19.823 vdpa/sfc: not in enabled drivers build config 00:01:19.823 event/*: missing internal dependency, "eventdev" 00:01:19.823 baseband/*: missing internal dependency, "bbdev" 00:01:19.823 gpu/*: missing internal dependency, "gpudev" 00:01:19.823 00:01:19.823 00:01:20.081 Build targets in project: 85 00:01:20.081 00:01:20.081 DPDK 24.03.0 00:01:20.081 00:01:20.081 User defined options 00:01:20.081 buildtype : debug 00:01:20.081 default_library : shared 00:01:20.081 libdir : lib 00:01:20.082 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:20.082 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:20.082 c_link_args : 00:01:20.082 cpu_instruction_set: native 00:01:20.082 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:20.082 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:20.082 enable_docs : false 00:01:20.082 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:20.082 enable_kmods : false 00:01:20.082 max_lcores : 128 00:01:20.082 tests : false 00:01:20.082 00:01:20.082 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:20.660 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:20.660 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:20.660 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:20.660 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:20.660 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:20.920 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:20.920 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:20.920 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:20.920 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:20.920 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:20.920 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:20.920 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:20.920 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:20.920 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:20.920 [14/268] Linking static target lib/librte_log.a 00:01:20.920 [15/268] Linking static target lib/librte_kvargs.a 00:01:20.920 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:20.920 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:20.920 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:20.920 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:20.920 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:20.920 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:20.920 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:20.920 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:20.921 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:20.921 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:20.921 [26/268] Linking static target lib/librte_pci.a 00:01:20.921 [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:20.921 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:20.921 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:20.921 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:21.180 [31/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:21.180 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:21.180 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:21.180 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:21.180 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:21.180 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:21.476 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:21.476 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:21.476 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:21.476 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:21.476 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:21.476 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:21.476 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:21.476 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:21.476 [45/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:21.476 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:21.476 [47/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:21.476 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:21.476 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:21.476 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:21.476 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:21.476 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:21.476 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:21.476 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:21.476 [55/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:21.476 [56/268] Linking static target lib/librte_ring.a 00:01:21.476 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:21.476 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:21.476 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:21.476 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:21.476 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:21.476 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:21.476 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:21.476 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:21.476 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:21.476 [66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:21.476 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:21.476 [68/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:21.476 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:21.476 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:21.476 [71/268] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:21.476 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:21.476 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:21.476 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:21.476 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:21.476 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:21.476 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:21.477 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:21.477 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:21.477 [80/268] Linking static target lib/librte_telemetry.a 00:01:21.477 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:21.477 [82/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.477 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:21.477 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:21.477 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:21.477 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:21.477 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:21.477 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:21.477 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:21.477 [90/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:21.477 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:21.477 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:21.477 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:21.477 [94/268] Linking static target lib/librte_meter.a 00:01:21.477 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:21.477 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:21.477 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:21.477 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:21.477 [99/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.477 [100/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:21.477 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:21.477 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:21.477 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:21.477 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:21.477 [105/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:21.477 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:21.477 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:21.477 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:21.477 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:21.477 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:21.477 [111/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:21.477 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:21.477 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:21.477 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:21.477 [115/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:21.477 [116/268] Linking static target lib/librte_cmdline.a 00:01:21.477 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:21.477 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:21.477 [119/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:21.735 [120/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:21.735 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:21.735 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:21.735 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:21.735 [124/268] Linking static target lib/librte_mempool.a 00:01:21.735 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:21.735 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:21.735 [127/268] Linking static target lib/librte_rcu.a 00:01:21.735 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:21.735 [129/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:21.736 [130/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:21.736 [131/268] Linking static target lib/librte_dmadev.a 00:01:21.736 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:21.736 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:21.736 [134/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:21.736 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:21.736 [136/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:21.736 [137/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:21.736 [138/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:21.736 [139/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:21.736 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:21.736 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:21.736 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:21.736 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:21.736 [144/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.736 [145/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.736 [146/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.736 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:21.736 [148/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:21.736 [149/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:21.736 [150/268] Linking static target lib/librte_mbuf.a 00:01:21.736 [151/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:21.736 [152/268] Linking target lib/librte_log.so.24.1 00:01:21.994 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:21.994 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:21.994 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:21.994 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:21.994 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:21.994 [158/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:21.994 [159/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:21.994 [160/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:21.994 [161/268] Linking static target lib/librte_net.a 00:01:21.994 [162/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:21.994 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:21.994 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:21.994 [165/268] Linking static target lib/librte_eal.a 00:01:21.994 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:21.994 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:21.994 [168/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.994 [169/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:21.994 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:21.994 [171/268] Linking static target lib/librte_security.a 00:01:21.994 [172/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:21.994 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:21.994 [174/268] Linking target lib/librte_kvargs.so.24.1 00:01:21.994 [175/268] Linking static target lib/librte_power.a 00:01:21.994 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:21.994 [177/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:21.994 [178/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.994 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:21.994 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:21.994 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:21.994 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:21.994 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:21.994 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:21.994 [185/268] Linking static target lib/librte_reorder.a 00:01:21.994 [186/268] Linking target lib/librte_telemetry.so.24.1 00:01:22.252 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.252 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.252 [189/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:22.252 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.252 [191/268] Linking static target lib/librte_timer.a 00:01:22.252 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.252 [193/268] Generating 
symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:22.252 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.252 [195/268] Linking static target drivers/librte_bus_vdev.a 00:01:22.252 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:22.252 [197/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:22.252 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:22.252 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:22.253 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.253 [201/268] Linking static target lib/librte_hash.a 00:01:22.253 [202/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:22.253 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.253 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.253 [205/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.253 [206/268] Linking static target lib/librte_compressdev.a 00:01:22.253 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:22.253 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.253 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.253 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:22.253 [211/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.511 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.511 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.511 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.511 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.769 [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:22.769 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.769 [218/268] Linking static target lib/librte_cryptodev.a 00:01:22.769 [219/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:22.769 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:22.769 [221/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.769 [222/268] Linking static target lib/librte_ethdev.a 00:01:22.769 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.028 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.028 [225/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.028 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.286 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.665 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.925 [229/268] 
Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:24.925 [230/268] Linking static target lib/librte_vhost.a 00:01:26.832 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.110 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.679 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.679 [234/268] Linking target lib/librte_eal.so.24.1 00:01:32.679 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:32.679 [236/268] Linking target lib/librte_pci.so.24.1 00:01:32.679 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:32.679 [238/268] Linking target lib/librte_ring.so.24.1 00:01:32.948 [239/268] Linking target lib/librte_meter.so.24.1 00:01:32.948 [240/268] Linking target lib/librte_timer.so.24.1 00:01:32.948 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:32.948 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:32.949 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:32.949 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:32.949 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:32.949 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:32.949 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:32.949 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:32.949 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:33.252 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:33.252 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:33.252 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:33.252 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:33.252 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:33.511 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:33.511 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:33.511 [257/268] Linking target lib/librte_net.so.24.1 00:01:33.511 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:33.511 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:33.511 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:33.770 [261/268] Linking target lib/librte_security.so.24.1 00:01:33.770 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:33.770 [263/268] Linking target lib/librte_hash.so.24.1 00:01:33.770 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:33.770 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:33.770 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:33.770 [267/268] Linking target lib/librte_power.so.24.1 00:01:34.029 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:34.029 INFO: autodetecting backend as ninja 00:01:34.029 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:34.967 CC lib/log/log.o 00:01:34.968 CC lib/log/log_flags.o 00:01:34.968 CC lib/log/log_deprecated.o 00:01:34.968 CC lib/ut/ut.o 00:01:35.227 CC 
lib/ut_mock/mock.o 00:01:35.227 LIB libspdk_ut.a 00:01:35.227 LIB libspdk_ut_mock.a 00:01:35.227 SO libspdk_ut.so.2.0 00:01:35.227 SO libspdk_ut_mock.so.6.0 00:01:35.485 SYMLINK libspdk_ut.so 00:01:35.485 SYMLINK libspdk_ut_mock.so 00:01:35.485 LIB libspdk_log.a 00:01:35.485 SO libspdk_log.so.7.0 00:01:35.485 SYMLINK libspdk_log.so 00:01:36.054 CC lib/dma/dma.o 00:01:36.054 CC lib/ioat/ioat.o 00:01:36.054 CXX lib/trace_parser/trace.o 00:01:36.054 CC lib/util/base64.o 00:01:36.054 CC lib/util/bit_array.o 00:01:36.054 CC lib/util/cpuset.o 00:01:36.054 CC lib/util/crc16.o 00:01:36.054 CC lib/util/crc32.o 00:01:36.054 CC lib/util/crc32c.o 00:01:36.054 CC lib/util/crc32_ieee.o 00:01:36.054 CC lib/util/crc64.o 00:01:36.054 CC lib/util/dif.o 00:01:36.054 CC lib/util/fd.o 00:01:36.054 CC lib/util/fd_group.o 00:01:36.054 CC lib/util/file.o 00:01:36.054 CC lib/util/hexlify.o 00:01:36.054 CC lib/util/iov.o 00:01:36.054 CC lib/util/math.o 00:01:36.054 CC lib/util/net.o 00:01:36.054 CC lib/util/pipe.o 00:01:36.054 CC lib/util/strerror_tls.o 00:01:36.054 CC lib/util/string.o 00:01:36.054 CC lib/util/uuid.o 00:01:36.054 CC lib/util/xor.o 00:01:36.054 CC lib/util/zipf.o 00:01:36.054 CC lib/vfio_user/host/vfio_user_pci.o 00:01:36.054 CC lib/vfio_user/host/vfio_user.o 00:01:36.054 LIB libspdk_dma.a 00:01:36.054 SO libspdk_dma.so.4.0 00:01:36.313 LIB libspdk_ioat.a 00:01:36.313 SYMLINK libspdk_dma.so 00:01:36.313 SO libspdk_ioat.so.7.0 00:01:36.313 SYMLINK libspdk_ioat.so 00:01:36.313 LIB libspdk_vfio_user.a 00:01:36.313 SO libspdk_vfio_user.so.5.0 00:01:36.572 SYMLINK libspdk_vfio_user.so 00:01:36.572 LIB libspdk_util.a 00:01:36.572 SO libspdk_util.so.10.0 00:01:36.572 SYMLINK libspdk_util.so 00:01:36.831 LIB libspdk_trace_parser.a 00:01:37.090 SO libspdk_trace_parser.so.5.0 00:01:37.090 CC lib/conf/conf.o 00:01:37.090 CC lib/json/json_parse.o 00:01:37.090 CC lib/env_dpdk/env.o 00:01:37.090 CC lib/json/json_write.o 00:01:37.090 CC lib/json/json_util.o 00:01:37.090 CC lib/env_dpdk/memory.o 00:01:37.090 CC lib/env_dpdk/pci.o 00:01:37.090 CC lib/env_dpdk/init.o 00:01:37.090 CC lib/env_dpdk/pci_ioat.o 00:01:37.090 CC lib/env_dpdk/threads.o 00:01:37.090 CC lib/env_dpdk/pci_virtio.o 00:01:37.090 CC lib/vmd/led.o 00:01:37.090 CC lib/env_dpdk/pci_vmd.o 00:01:37.090 CC lib/vmd/vmd.o 00:01:37.090 CC lib/env_dpdk/pci_idxd.o 00:01:37.090 CC lib/env_dpdk/pci_event.o 00:01:37.090 CC lib/env_dpdk/sigbus_handler.o 00:01:37.090 CC lib/rdma_utils/rdma_utils.o 00:01:37.090 CC lib/env_dpdk/pci_dpdk.o 00:01:37.090 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:37.090 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:37.090 CC lib/idxd/idxd.o 00:01:37.090 CC lib/idxd/idxd_user.o 00:01:37.090 CC lib/idxd/idxd_kernel.o 00:01:37.090 CC lib/rdma_provider/common.o 00:01:37.090 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:37.090 SYMLINK libspdk_trace_parser.so 00:01:37.349 LIB libspdk_rdma_provider.a 00:01:37.349 SO libspdk_rdma_provider.so.6.0 00:01:37.349 LIB libspdk_rdma_utils.a 00:01:37.349 LIB libspdk_json.a 00:01:37.349 SYMLINK libspdk_rdma_provider.so 00:01:37.349 SO libspdk_rdma_utils.so.1.0 00:01:37.349 SO libspdk_json.so.6.0 00:01:37.349 SYMLINK libspdk_rdma_utils.so 00:01:37.607 LIB libspdk_conf.a 00:01:37.607 SYMLINK libspdk_json.so 00:01:37.607 LIB libspdk_vmd.a 00:01:37.607 SO libspdk_conf.so.6.0 00:01:37.607 SO libspdk_vmd.so.6.0 00:01:37.607 SYMLINK libspdk_conf.so 00:01:37.607 LIB libspdk_idxd.a 00:01:37.607 SYMLINK libspdk_vmd.so 00:01:37.607 SO libspdk_idxd.so.12.0 00:01:37.866 SYMLINK libspdk_idxd.so 00:01:37.866 CC 
lib/jsonrpc/jsonrpc_server.o 00:01:37.866 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:37.866 CC lib/jsonrpc/jsonrpc_client.o 00:01:37.866 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:37.866 LIB libspdk_env_dpdk.a 00:01:38.126 SO libspdk_env_dpdk.so.15.0 00:01:38.126 LIB libspdk_jsonrpc.a 00:01:38.126 SO libspdk_jsonrpc.so.6.0 00:01:38.126 SYMLINK libspdk_env_dpdk.so 00:01:38.126 SYMLINK libspdk_jsonrpc.so 00:01:38.694 CC lib/rpc/rpc.o 00:01:38.694 LIB libspdk_rpc.a 00:01:38.694 SO libspdk_rpc.so.6.0 00:01:38.953 SYMLINK libspdk_rpc.so 00:01:39.212 CC lib/trace/trace.o 00:01:39.212 CC lib/trace/trace_flags.o 00:01:39.212 CC lib/trace/trace_rpc.o 00:01:39.212 CC lib/notify/notify.o 00:01:39.212 CC lib/notify/notify_rpc.o 00:01:39.212 CC lib/keyring/keyring.o 00:01:39.212 CC lib/keyring/keyring_rpc.o 00:01:39.472 LIB libspdk_keyring.a 00:01:39.472 LIB libspdk_trace.a 00:01:39.472 SO libspdk_keyring.so.1.0 00:01:39.472 LIB libspdk_notify.a 00:01:39.472 SO libspdk_trace.so.10.0 00:01:39.472 SO libspdk_notify.so.6.0 00:01:39.472 SYMLINK libspdk_trace.so 00:01:39.731 SYMLINK libspdk_keyring.so 00:01:39.731 SYMLINK libspdk_notify.so 00:01:39.990 CC lib/sock/sock.o 00:01:39.990 CC lib/sock/sock_rpc.o 00:01:39.990 CC lib/thread/thread.o 00:01:39.990 CC lib/thread/iobuf.o 00:01:40.250 LIB libspdk_sock.a 00:01:40.250 SO libspdk_sock.so.10.0 00:01:40.250 SYMLINK libspdk_sock.so 00:01:40.509 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:40.509 CC lib/nvme/nvme_ctrlr.o 00:01:40.509 CC lib/nvme/nvme_fabric.o 00:01:40.509 CC lib/nvme/nvme_ns_cmd.o 00:01:40.509 CC lib/nvme/nvme_ns.o 00:01:40.509 CC lib/nvme/nvme_pcie_common.o 00:01:40.509 CC lib/nvme/nvme_pcie.o 00:01:40.509 CC lib/nvme/nvme_qpair.o 00:01:40.509 CC lib/nvme/nvme.o 00:01:40.509 CC lib/nvme/nvme_quirks.o 00:01:40.509 CC lib/nvme/nvme_transport.o 00:01:40.509 CC lib/nvme/nvme_discovery.o 00:01:40.509 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:40.509 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:40.509 CC lib/nvme/nvme_tcp.o 00:01:40.509 CC lib/nvme/nvme_opal.o 00:01:40.509 CC lib/nvme/nvme_io_msg.o 00:01:40.509 CC lib/nvme/nvme_poll_group.o 00:01:40.509 CC lib/nvme/nvme_zns.o 00:01:40.509 CC lib/nvme/nvme_stubs.o 00:01:40.509 CC lib/nvme/nvme_auth.o 00:01:40.509 CC lib/nvme/nvme_cuse.o 00:01:40.509 CC lib/nvme/nvme_vfio_user.o 00:01:40.509 CC lib/nvme/nvme_rdma.o 00:01:43.046 LIB libspdk_nvme.a 00:01:43.046 SO libspdk_nvme.so.13.1 00:01:43.046 LIB libspdk_thread.a 00:01:43.046 SO libspdk_thread.so.10.1 00:01:43.304 SYMLINK libspdk_thread.so 00:01:43.304 SYMLINK libspdk_nvme.so 00:01:43.563 CC lib/accel/accel.o 00:01:43.563 CC lib/accel/accel_rpc.o 00:01:43.563 CC lib/accel/accel_sw.o 00:01:43.563 CC lib/init/json_config.o 00:01:43.563 CC lib/vfu_tgt/tgt_endpoint.o 00:01:43.563 CC lib/init/subsystem.o 00:01:43.563 CC lib/vfu_tgt/tgt_rpc.o 00:01:43.563 CC lib/blob/blobstore.o 00:01:43.563 CC lib/init/subsystem_rpc.o 00:01:43.563 CC lib/blob/request.o 00:01:43.563 CC lib/virtio/virtio.o 00:01:43.563 CC lib/init/rpc.o 00:01:43.563 CC lib/virtio/virtio_vhost_user.o 00:01:43.563 CC lib/blob/zeroes.o 00:01:43.563 CC lib/virtio/virtio_vfio_user.o 00:01:43.563 CC lib/blob/blob_bs_dev.o 00:01:43.563 CC lib/virtio/virtio_pci.o 00:01:43.823 LIB libspdk_init.a 00:01:43.823 SO libspdk_init.so.5.0 00:01:43.823 LIB libspdk_virtio.a 00:01:43.823 LIB libspdk_vfu_tgt.a 00:01:43.823 SYMLINK libspdk_init.so 00:01:43.823 SO libspdk_virtio.so.7.0 00:01:43.823 SO libspdk_vfu_tgt.so.3.0 00:01:44.081 SYMLINK libspdk_vfu_tgt.so 00:01:44.081 SYMLINK libspdk_virtio.so 00:01:44.081 CC 
lib/event/app.o 00:01:44.341 CC lib/event/reactor.o 00:01:44.341 CC lib/event/log_rpc.o 00:01:44.341 CC lib/event/app_rpc.o 00:01:44.341 CC lib/event/scheduler_static.o 00:01:44.600 LIB libspdk_accel.a 00:01:44.600 SO libspdk_accel.so.16.0 00:01:44.600 SYMLINK libspdk_accel.so 00:01:44.860 LIB libspdk_event.a 00:01:44.860 SO libspdk_event.so.14.0 00:01:44.860 SYMLINK libspdk_event.so 00:01:44.860 CC lib/bdev/bdev.o 00:01:44.860 CC lib/bdev/bdev_rpc.o 00:01:44.860 CC lib/bdev/bdev_zone.o 00:01:44.860 CC lib/bdev/part.o 00:01:44.860 CC lib/bdev/scsi_nvme.o 00:01:46.767 LIB libspdk_bdev.a 00:01:46.767 SO libspdk_bdev.so.16.0 00:01:46.767 SYMLINK libspdk_bdev.so 00:01:47.026 CC lib/nvmf/ctrlr.o 00:01:47.026 CC lib/nvmf/ctrlr_discovery.o 00:01:47.026 CC lib/nvmf/ctrlr_bdev.o 00:01:47.026 CC lib/nvmf/subsystem.o 00:01:47.026 CC lib/nvmf/nvmf_rpc.o 00:01:47.026 CC lib/nvmf/nvmf.o 00:01:47.026 CC lib/nvmf/transport.o 00:01:47.026 CC lib/nvmf/tcp.o 00:01:47.026 CC lib/nbd/nbd.o 00:01:47.026 CC lib/nvmf/stubs.o 00:01:47.026 CC lib/nbd/nbd_rpc.o 00:01:47.026 CC lib/nvmf/mdns_server.o 00:01:47.026 CC lib/nvmf/vfio_user.o 00:01:47.026 CC lib/nvmf/auth.o 00:01:47.026 CC lib/nvmf/rdma.o 00:01:47.026 CC lib/ftl/ftl_core.o 00:01:47.026 CC lib/ublk/ublk.o 00:01:47.026 CC lib/scsi/dev.o 00:01:47.026 CC lib/ftl/ftl_init.o 00:01:47.026 CC lib/scsi/lun.o 00:01:47.026 CC lib/ublk/ublk_rpc.o 00:01:47.026 CC lib/scsi/port.o 00:01:47.026 CC lib/ftl/ftl_layout.o 00:01:47.026 CC lib/scsi/scsi.o 00:01:47.026 CC lib/ftl/ftl_debug.o 00:01:47.026 CC lib/scsi/scsi_bdev.o 00:01:47.026 CC lib/ftl/ftl_io.o 00:01:47.026 CC lib/scsi/scsi_pr.o 00:01:47.026 CC lib/ftl/ftl_sb.o 00:01:47.026 CC lib/scsi/scsi_rpc.o 00:01:47.026 CC lib/ftl/ftl_l2p.o 00:01:47.026 CC lib/scsi/task.o 00:01:47.026 CC lib/ftl/ftl_l2p_flat.o 00:01:47.026 CC lib/ftl/ftl_nv_cache.o 00:01:47.026 CC lib/ftl/ftl_band.o 00:01:47.026 CC lib/ftl/ftl_writer.o 00:01:47.026 CC lib/ftl/ftl_band_ops.o 00:01:47.026 CC lib/ftl/ftl_rq.o 00:01:47.026 CC lib/ftl/ftl_reloc.o 00:01:47.026 CC lib/ftl/ftl_l2p_cache.o 00:01:47.026 CC lib/ftl/ftl_p2l.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:47.026 CC lib/ftl/utils/ftl_conf.o 00:01:47.026 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:47.026 CC lib/ftl/utils/ftl_md.o 00:01:47.285 CC lib/ftl/utils/ftl_mempool.o 00:01:47.285 CC lib/ftl/utils/ftl_bitmap.o 00:01:47.285 CC lib/ftl/utils/ftl_property.o 00:01:47.285 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:47.285 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:47.285 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:47.285 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:47.285 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:47.285 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:47.285 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:47.285 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:47.285 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:47.285 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:47.285 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:47.285 CC lib/ftl/base/ftl_base_bdev.o 00:01:47.285 CC lib/ftl/base/ftl_base_dev.o 00:01:47.285 CC 
lib/ftl/ftl_trace.o 00:01:47.543 LIB libspdk_nbd.a 00:01:47.800 SO libspdk_nbd.so.7.0 00:01:47.800 SYMLINK libspdk_nbd.so 00:01:47.800 LIB libspdk_scsi.a 00:01:47.800 LIB libspdk_ublk.a 00:01:48.058 SO libspdk_scsi.so.9.0 00:01:48.058 SO libspdk_ublk.so.3.0 00:01:48.058 SYMLINK libspdk_ublk.so 00:01:48.058 SYMLINK libspdk_scsi.so 00:01:48.317 CC lib/iscsi/conn.o 00:01:48.318 CC lib/vhost/vhost.o 00:01:48.318 CC lib/iscsi/init_grp.o 00:01:48.318 CC lib/vhost/vhost_rpc.o 00:01:48.318 CC lib/iscsi/iscsi.o 00:01:48.318 CC lib/vhost/vhost_blk.o 00:01:48.318 CC lib/vhost/vhost_scsi.o 00:01:48.318 CC lib/iscsi/md5.o 00:01:48.318 CC lib/iscsi/param.o 00:01:48.318 CC lib/iscsi/portal_grp.o 00:01:48.318 CC lib/vhost/rte_vhost_user.o 00:01:48.318 CC lib/iscsi/tgt_node.o 00:01:48.318 CC lib/iscsi/iscsi_subsystem.o 00:01:48.318 CC lib/iscsi/iscsi_rpc.o 00:01:48.318 CC lib/iscsi/task.o 00:01:48.616 LIB libspdk_ftl.a 00:01:48.616 SO libspdk_ftl.so.9.0 00:01:48.874 SYMLINK libspdk_ftl.so 00:01:49.450 LIB libspdk_nvmf.a 00:01:49.450 SO libspdk_nvmf.so.19.0 00:01:49.450 LIB libspdk_vhost.a 00:01:49.450 SO libspdk_vhost.so.8.0 00:01:49.450 LIB libspdk_blob.a 00:01:49.713 SYMLINK libspdk_vhost.so 00:01:49.713 SO libspdk_blob.so.11.0 00:01:49.713 SYMLINK libspdk_blob.so 00:01:49.992 SYMLINK libspdk_nvmf.so 00:01:49.992 LIB libspdk_iscsi.a 00:01:49.992 SO libspdk_iscsi.so.8.0 00:01:49.992 CC lib/blobfs/blobfs.o 00:01:49.992 CC lib/blobfs/tree.o 00:01:49.992 CC lib/lvol/lvol.o 00:01:50.250 SYMLINK libspdk_iscsi.so 00:01:50.817 LIB libspdk_blobfs.a 00:01:50.817 SO libspdk_blobfs.so.10.0 00:01:51.076 LIB libspdk_lvol.a 00:01:51.076 SYMLINK libspdk_blobfs.so 00:01:51.076 SO libspdk_lvol.so.10.0 00:01:51.076 SYMLINK libspdk_lvol.so 00:01:51.644 CC module/env_dpdk/env_dpdk_rpc.o 00:01:51.644 CC module/vfu_device/vfu_virtio.o 00:01:51.644 CC module/vfu_device/vfu_virtio_blk.o 00:01:51.644 CC module/vfu_device/vfu_virtio_scsi.o 00:01:51.644 CC module/vfu_device/vfu_virtio_rpc.o 00:01:51.644 CC module/keyring/linux/keyring.o 00:01:51.644 CC module/keyring/linux/keyring_rpc.o 00:01:51.644 LIB libspdk_env_dpdk_rpc.a 00:01:51.644 CC module/blob/bdev/blob_bdev.o 00:01:51.644 CC module/accel/dsa/accel_dsa_rpc.o 00:01:51.644 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:51.644 CC module/accel/dsa/accel_dsa.o 00:01:51.644 CC module/scheduler/gscheduler/gscheduler.o 00:01:51.644 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:51.644 CC module/sock/posix/posix.o 00:01:51.644 CC module/accel/iaa/accel_iaa.o 00:01:51.644 CC module/accel/error/accel_error.o 00:01:51.644 CC module/accel/iaa/accel_iaa_rpc.o 00:01:51.644 CC module/keyring/file/keyring.o 00:01:51.644 CC module/accel/error/accel_error_rpc.o 00:01:51.644 CC module/keyring/file/keyring_rpc.o 00:01:51.644 CC module/accel/ioat/accel_ioat.o 00:01:51.644 CC module/accel/ioat/accel_ioat_rpc.o 00:01:51.644 SO libspdk_env_dpdk_rpc.so.6.0 00:01:51.902 SYMLINK libspdk_env_dpdk_rpc.so 00:01:51.902 LIB libspdk_keyring_linux.a 00:01:51.902 LIB libspdk_scheduler_gscheduler.a 00:01:51.902 SO libspdk_keyring_linux.so.1.0 00:01:51.902 LIB libspdk_scheduler_dpdk_governor.a 00:01:51.902 LIB libspdk_keyring_file.a 00:01:51.902 LIB libspdk_scheduler_dynamic.a 00:01:51.902 SO libspdk_scheduler_gscheduler.so.4.0 00:01:51.902 LIB libspdk_accel_error.a 00:01:51.902 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:51.902 LIB libspdk_accel_ioat.a 00:01:51.902 SO libspdk_keyring_file.so.1.0 00:01:51.902 LIB libspdk_accel_iaa.a 00:01:51.902 SO libspdk_scheduler_dynamic.so.4.0 
00:01:51.902 SO libspdk_accel_error.so.2.0 00:01:51.902 SYMLINK libspdk_keyring_linux.so 00:01:51.902 SO libspdk_accel_ioat.so.6.0 00:01:51.902 SO libspdk_accel_iaa.so.3.0 00:01:52.161 SYMLINK libspdk_scheduler_gscheduler.so 00:01:52.161 LIB libspdk_blob_bdev.a 00:01:52.161 LIB libspdk_accel_dsa.a 00:01:52.161 SYMLINK libspdk_scheduler_dynamic.so 00:01:52.161 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:52.161 SYMLINK libspdk_keyring_file.so 00:01:52.161 SO libspdk_blob_bdev.so.11.0 00:01:52.161 SO libspdk_accel_dsa.so.5.0 00:01:52.161 SYMLINK libspdk_accel_ioat.so 00:01:52.161 SYMLINK libspdk_accel_error.so 00:01:52.161 SYMLINK libspdk_accel_iaa.so 00:01:52.161 SYMLINK libspdk_blob_bdev.so 00:01:52.161 SYMLINK libspdk_accel_dsa.so 00:01:52.162 LIB libspdk_vfu_device.a 00:01:52.162 SO libspdk_vfu_device.so.3.0 00:01:52.419 SYMLINK libspdk_vfu_device.so 00:01:52.419 LIB libspdk_sock_posix.a 00:01:52.419 SO libspdk_sock_posix.so.6.0 00:01:52.676 CC module/bdev/raid/bdev_raid.o 00:01:52.676 CC module/bdev/raid/bdev_raid_rpc.o 00:01:52.676 CC module/bdev/raid/raid0.o 00:01:52.676 CC module/bdev/raid/bdev_raid_sb.o 00:01:52.676 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:52.676 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:52.676 CC module/bdev/raid/concat.o 00:01:52.676 CC module/bdev/raid/raid1.o 00:01:52.676 CC module/bdev/null/bdev_null_rpc.o 00:01:52.676 CC module/bdev/null/bdev_null.o 00:01:52.676 CC module/bdev/delay/vbdev_delay.o 00:01:52.676 CC module/bdev/error/vbdev_error.o 00:01:52.676 CC module/bdev/error/vbdev_error_rpc.o 00:01:52.676 CC module/bdev/lvol/vbdev_lvol.o 00:01:52.676 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:52.676 CC module/blobfs/bdev/blobfs_bdev.o 00:01:52.676 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:52.676 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:52.676 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:52.676 CC module/bdev/nvme/bdev_mdns_client.o 00:01:52.676 CC module/bdev/nvme/nvme_rpc.o 00:01:52.676 CC module/bdev/nvme/bdev_nvme.o 00:01:52.676 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:52.676 CC module/bdev/nvme/vbdev_opal.o 00:01:52.676 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:52.676 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:52.676 CC module/bdev/ftl/bdev_ftl.o 00:01:52.676 CC module/bdev/iscsi/bdev_iscsi.o 00:01:52.676 SYMLINK libspdk_sock_posix.so 00:01:52.676 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:52.676 CC module/bdev/aio/bdev_aio.o 00:01:52.676 CC module/bdev/gpt/gpt.o 00:01:52.676 CC module/bdev/passthru/vbdev_passthru.o 00:01:52.676 CC module/bdev/aio/bdev_aio_rpc.o 00:01:52.676 CC module/bdev/gpt/vbdev_gpt.o 00:01:52.676 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:52.677 CC module/bdev/malloc/bdev_malloc.o 00:01:52.677 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:52.677 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:52.677 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:52.677 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:52.677 CC module/bdev/split/vbdev_split.o 00:01:52.677 CC module/bdev/split/vbdev_split_rpc.o 00:01:52.935 LIB libspdk_blobfs_bdev.a 00:01:52.935 SO libspdk_blobfs_bdev.so.6.0 00:01:52.935 LIB libspdk_bdev_null.a 00:01:52.935 LIB libspdk_bdev_gpt.a 00:01:52.935 LIB libspdk_bdev_error.a 00:01:52.935 LIB libspdk_bdev_passthru.a 00:01:52.935 SO libspdk_bdev_null.so.6.0 00:01:52.935 LIB libspdk_bdev_ftl.a 00:01:52.935 LIB libspdk_bdev_zone_block.a 00:01:52.935 SO libspdk_bdev_gpt.so.6.0 00:01:52.935 SO libspdk_bdev_error.so.6.0 00:01:52.935 SYMLINK libspdk_blobfs_bdev.so 00:01:53.193 SO 
libspdk_bdev_passthru.so.6.0 00:01:53.193 SO libspdk_bdev_ftl.so.6.0 00:01:53.193 SO libspdk_bdev_zone_block.so.6.0 00:01:53.193 LIB libspdk_bdev_aio.a 00:01:53.193 LIB libspdk_bdev_malloc.a 00:01:53.193 SYMLINK libspdk_bdev_null.so 00:01:53.193 LIB libspdk_bdev_iscsi.a 00:01:53.193 LIB libspdk_bdev_delay.a 00:01:53.193 SYMLINK libspdk_bdev_error.so 00:01:53.193 SYMLINK libspdk_bdev_gpt.so 00:01:53.193 SO libspdk_bdev_aio.so.6.0 00:01:53.193 SO libspdk_bdev_malloc.so.6.0 00:01:53.193 SYMLINK libspdk_bdev_passthru.so 00:01:53.193 SO libspdk_bdev_iscsi.so.6.0 00:01:53.193 SO libspdk_bdev_delay.so.6.0 00:01:53.193 LIB libspdk_bdev_split.a 00:01:53.193 SYMLINK libspdk_bdev_ftl.so 00:01:53.193 SYMLINK libspdk_bdev_zone_block.so 00:01:53.193 SYMLINK libspdk_bdev_malloc.so 00:01:53.193 SO libspdk_bdev_split.so.6.0 00:01:53.193 SYMLINK libspdk_bdev_aio.so 00:01:53.193 SYMLINK libspdk_bdev_delay.so 00:01:53.193 SYMLINK libspdk_bdev_iscsi.so 00:01:53.193 LIB libspdk_bdev_lvol.a 00:01:53.193 LIB libspdk_bdev_virtio.a 00:01:53.193 SYMLINK libspdk_bdev_split.so 00:01:53.193 SO libspdk_bdev_lvol.so.6.0 00:01:53.193 SO libspdk_bdev_virtio.so.6.0 00:01:53.452 SYMLINK libspdk_bdev_lvol.so 00:01:53.452 SYMLINK libspdk_bdev_virtio.so 00:01:53.710 LIB libspdk_bdev_raid.a 00:01:53.710 SO libspdk_bdev_raid.so.6.0 00:01:53.710 SYMLINK libspdk_bdev_raid.so 00:01:55.132 LIB libspdk_bdev_nvme.a 00:01:55.132 SO libspdk_bdev_nvme.so.7.0 00:01:55.132 SYMLINK libspdk_bdev_nvme.so 00:01:55.701 CC module/event/subsystems/iobuf/iobuf.o 00:01:55.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:55.701 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:55.701 CC module/event/subsystems/keyring/keyring.o 00:01:55.701 CC module/event/subsystems/vmd/vmd.o 00:01:55.701 CC module/event/subsystems/sock/sock.o 00:01:55.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:55.701 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:55.701 CC module/event/subsystems/scheduler/scheduler.o 00:01:55.959 LIB libspdk_event_vhost_blk.a 00:01:55.960 LIB libspdk_event_keyring.a 00:01:55.960 LIB libspdk_event_vmd.a 00:01:55.960 LIB libspdk_event_vfu_tgt.a 00:01:55.960 LIB libspdk_event_scheduler.a 00:01:55.960 LIB libspdk_event_sock.a 00:01:55.960 SO libspdk_event_vhost_blk.so.3.0 00:01:55.960 SO libspdk_event_keyring.so.1.0 00:01:55.960 SO libspdk_event_scheduler.so.4.0 00:01:55.960 SO libspdk_event_vfu_tgt.so.3.0 00:01:55.960 SO libspdk_event_vmd.so.6.0 00:01:55.960 SO libspdk_event_sock.so.5.0 00:01:55.960 SYMLINK libspdk_event_vhost_blk.so 00:01:55.960 SYMLINK libspdk_event_keyring.so 00:01:55.960 SYMLINK libspdk_event_sock.so 00:01:55.960 SYMLINK libspdk_event_scheduler.so 00:01:55.960 SYMLINK libspdk_event_vfu_tgt.so 00:01:55.960 SYMLINK libspdk_event_vmd.so 00:01:55.960 LIB libspdk_event_iobuf.a 00:01:56.220 SO libspdk_event_iobuf.so.3.0 00:01:56.220 SYMLINK libspdk_event_iobuf.so 00:01:56.478 CC module/event/subsystems/accel/accel.o 00:01:56.737 LIB libspdk_event_accel.a 00:01:56.737 SO libspdk_event_accel.so.6.0 00:01:56.737 SYMLINK libspdk_event_accel.so 00:01:56.996 CC module/event/subsystems/bdev/bdev.o 00:01:57.255 LIB libspdk_event_bdev.a 00:01:57.255 SO libspdk_event_bdev.so.6.0 00:01:57.515 SYMLINK libspdk_event_bdev.so 00:01:57.773 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:57.773 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:57.773 CC module/event/subsystems/scsi/scsi.o 00:01:57.773 CC module/event/subsystems/ublk/ublk.o 00:01:57.773 CC module/event/subsystems/nbd/nbd.o 00:01:57.773 LIB libspdk_event_ublk.a 
00:01:57.773 LIB libspdk_event_nbd.a 00:01:57.773 LIB libspdk_event_scsi.a 00:01:58.033 SO libspdk_event_ublk.so.3.0 00:01:58.033 SO libspdk_event_nbd.so.6.0 00:01:58.033 SO libspdk_event_scsi.so.6.0 00:01:58.033 LIB libspdk_event_nvmf.a 00:01:58.033 SYMLINK libspdk_event_ublk.so 00:01:58.033 SYMLINK libspdk_event_nbd.so 00:01:58.033 SO libspdk_event_nvmf.so.6.0 00:01:58.033 SYMLINK libspdk_event_scsi.so 00:01:58.033 SYMLINK libspdk_event_nvmf.so 00:01:58.292 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:58.292 CC module/event/subsystems/iscsi/iscsi.o 00:01:58.552 LIB libspdk_event_vhost_scsi.a 00:01:58.552 LIB libspdk_event_iscsi.a 00:01:58.552 SO libspdk_event_vhost_scsi.so.3.0 00:01:58.552 SO libspdk_event_iscsi.so.6.0 00:01:58.552 SYMLINK libspdk_event_vhost_scsi.so 00:01:58.552 SYMLINK libspdk_event_iscsi.so 00:01:58.811 SO libspdk.so.6.0 00:01:58.811 SYMLINK libspdk.so 00:01:59.069 CXX app/trace/trace.o 00:01:59.069 CC app/spdk_nvme_perf/perf.o 00:01:59.069 CC app/trace_record/trace_record.o 00:01:59.069 CC app/spdk_nvme_identify/identify.o 00:01:59.069 TEST_HEADER include/spdk/accel.h 00:01:59.069 TEST_HEADER include/spdk/accel_module.h 00:01:59.069 TEST_HEADER include/spdk/assert.h 00:01:59.069 TEST_HEADER include/spdk/barrier.h 00:01:59.069 TEST_HEADER include/spdk/base64.h 00:01:59.069 TEST_HEADER include/spdk/bdev.h 00:01:59.069 CC app/spdk_nvme_discover/discovery_aer.o 00:01:59.069 TEST_HEADER include/spdk/bdev_zone.h 00:01:59.069 TEST_HEADER include/spdk/bdev_module.h 00:01:59.070 CC app/spdk_lspci/spdk_lspci.o 00:01:59.070 TEST_HEADER include/spdk/bit_array.h 00:01:59.070 CC test/rpc_client/rpc_client_test.o 00:01:59.070 TEST_HEADER include/spdk/blob_bdev.h 00:01:59.070 TEST_HEADER include/spdk/bit_pool.h 00:01:59.070 CC app/spdk_top/spdk_top.o 00:01:59.070 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:59.070 TEST_HEADER include/spdk/blobfs.h 00:01:59.070 TEST_HEADER include/spdk/blob.h 00:01:59.070 TEST_HEADER include/spdk/conf.h 00:01:59.070 TEST_HEADER include/spdk/config.h 00:01:59.070 TEST_HEADER include/spdk/cpuset.h 00:01:59.070 TEST_HEADER include/spdk/crc16.h 00:01:59.070 TEST_HEADER include/spdk/crc64.h 00:01:59.070 TEST_HEADER include/spdk/crc32.h 00:01:59.070 TEST_HEADER include/spdk/dif.h 00:01:59.070 TEST_HEADER include/spdk/env_dpdk.h 00:01:59.070 TEST_HEADER include/spdk/dma.h 00:01:59.070 TEST_HEADER include/spdk/endian.h 00:01:59.070 TEST_HEADER include/spdk/env.h 00:01:59.070 TEST_HEADER include/spdk/fd_group.h 00:01:59.070 TEST_HEADER include/spdk/event.h 00:01:59.070 TEST_HEADER include/spdk/fd.h 00:01:59.070 TEST_HEADER include/spdk/ftl.h 00:01:59.070 TEST_HEADER include/spdk/file.h 00:01:59.070 TEST_HEADER include/spdk/gpt_spec.h 00:01:59.070 TEST_HEADER include/spdk/hexlify.h 00:01:59.070 TEST_HEADER include/spdk/idxd.h 00:01:59.070 TEST_HEADER include/spdk/histogram_data.h 00:01:59.070 TEST_HEADER include/spdk/init.h 00:01:59.070 TEST_HEADER include/spdk/ioat.h 00:01:59.070 TEST_HEADER include/spdk/idxd_spec.h 00:01:59.070 TEST_HEADER include/spdk/iscsi_spec.h 00:01:59.070 TEST_HEADER include/spdk/ioat_spec.h 00:01:59.070 TEST_HEADER include/spdk/json.h 00:01:59.070 TEST_HEADER include/spdk/jsonrpc.h 00:01:59.070 TEST_HEADER include/spdk/keyring.h 00:01:59.070 TEST_HEADER include/spdk/keyring_module.h 00:01:59.070 TEST_HEADER include/spdk/log.h 00:01:59.070 TEST_HEADER include/spdk/likely.h 00:01:59.070 TEST_HEADER include/spdk/memory.h 00:01:59.070 TEST_HEADER include/spdk/lvol.h 00:01:59.070 TEST_HEADER include/spdk/nbd.h 00:01:59.070 
TEST_HEADER include/spdk/net.h 00:01:59.070 TEST_HEADER include/spdk/mmio.h 00:01:59.070 TEST_HEADER include/spdk/notify.h 00:01:59.070 TEST_HEADER include/spdk/nvme.h 00:01:59.070 TEST_HEADER include/spdk/nvme_intel.h 00:01:59.070 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:59.070 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:59.070 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:59.070 TEST_HEADER include/spdk/nvme_spec.h 00:01:59.070 CC app/spdk_dd/spdk_dd.o 00:01:59.070 TEST_HEADER include/spdk/nvme_zns.h 00:01:59.070 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:59.070 TEST_HEADER include/spdk/nvmf_spec.h 00:01:59.070 TEST_HEADER include/spdk/nvmf.h 00:01:59.070 TEST_HEADER include/spdk/nvmf_transport.h 00:01:59.070 TEST_HEADER include/spdk/opal_spec.h 00:01:59.070 TEST_HEADER include/spdk/opal.h 00:01:59.070 CC app/iscsi_tgt/iscsi_tgt.o 00:01:59.070 TEST_HEADER include/spdk/pci_ids.h 00:01:59.070 TEST_HEADER include/spdk/pipe.h 00:01:59.070 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:59.070 TEST_HEADER include/spdk/queue.h 00:01:59.070 TEST_HEADER include/spdk/reduce.h 00:01:59.070 TEST_HEADER include/spdk/rpc.h 00:01:59.070 TEST_HEADER include/spdk/scsi.h 00:01:59.070 TEST_HEADER include/spdk/scsi_spec.h 00:01:59.346 TEST_HEADER include/spdk/scheduler.h 00:01:59.346 TEST_HEADER include/spdk/sock.h 00:01:59.346 CC app/nvmf_tgt/nvmf_main.o 00:01:59.346 TEST_HEADER include/spdk/stdinc.h 00:01:59.346 TEST_HEADER include/spdk/string.h 00:01:59.346 TEST_HEADER include/spdk/thread.h 00:01:59.346 TEST_HEADER include/spdk/tree.h 00:01:59.346 TEST_HEADER include/spdk/trace.h 00:01:59.346 TEST_HEADER include/spdk/ublk.h 00:01:59.346 TEST_HEADER include/spdk/trace_parser.h 00:01:59.346 TEST_HEADER include/spdk/util.h 00:01:59.346 TEST_HEADER include/spdk/uuid.h 00:01:59.346 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:59.346 TEST_HEADER include/spdk/version.h 00:01:59.346 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:59.346 TEST_HEADER include/spdk/vhost.h 00:01:59.346 TEST_HEADER include/spdk/vmd.h 00:01:59.346 TEST_HEADER include/spdk/zipf.h 00:01:59.346 TEST_HEADER include/spdk/xor.h 00:01:59.346 CXX test/cpp_headers/accel.o 00:01:59.346 CXX test/cpp_headers/accel_module.o 00:01:59.346 CXX test/cpp_headers/assert.o 00:01:59.346 CXX test/cpp_headers/barrier.o 00:01:59.346 CXX test/cpp_headers/bdev.o 00:01:59.346 CXX test/cpp_headers/base64.o 00:01:59.346 CXX test/cpp_headers/bdev_module.o 00:01:59.346 CXX test/cpp_headers/bit_array.o 00:01:59.346 CXX test/cpp_headers/bdev_zone.o 00:01:59.346 CXX test/cpp_headers/bit_pool.o 00:01:59.346 CXX test/cpp_headers/blob_bdev.o 00:01:59.346 CXX test/cpp_headers/blob.o 00:01:59.346 CXX test/cpp_headers/blobfs.o 00:01:59.346 CXX test/cpp_headers/blobfs_bdev.o 00:01:59.346 CXX test/cpp_headers/conf.o 00:01:59.346 CXX test/cpp_headers/cpuset.o 00:01:59.346 CXX test/cpp_headers/config.o 00:01:59.346 CXX test/cpp_headers/crc32.o 00:01:59.346 CXX test/cpp_headers/crc16.o 00:01:59.346 CXX test/cpp_headers/dma.o 00:01:59.346 CXX test/cpp_headers/crc64.o 00:01:59.346 CC app/spdk_tgt/spdk_tgt.o 00:01:59.346 CXX test/cpp_headers/dif.o 00:01:59.346 CXX test/cpp_headers/env_dpdk.o 00:01:59.346 CXX test/cpp_headers/endian.o 00:01:59.346 CXX test/cpp_headers/event.o 00:01:59.346 CXX test/cpp_headers/fd_group.o 00:01:59.346 CXX test/cpp_headers/env.o 00:01:59.346 CXX test/cpp_headers/fd.o 00:01:59.346 CXX test/cpp_headers/ftl.o 00:01:59.346 CXX test/cpp_headers/hexlify.o 00:01:59.346 CXX test/cpp_headers/file.o 00:01:59.346 CXX test/cpp_headers/gpt_spec.o 
00:01:59.346 CXX test/cpp_headers/idxd.o 00:01:59.346 CXX test/cpp_headers/init.o 00:01:59.346 CXX test/cpp_headers/idxd_spec.o 00:01:59.346 CXX test/cpp_headers/histogram_data.o 00:01:59.346 CXX test/cpp_headers/ioat.o 00:01:59.346 CXX test/cpp_headers/iscsi_spec.o 00:01:59.346 CXX test/cpp_headers/jsonrpc.o 00:01:59.346 CXX test/cpp_headers/json.o 00:01:59.346 CXX test/cpp_headers/ioat_spec.o 00:01:59.346 CXX test/cpp_headers/keyring_module.o 00:01:59.346 CXX test/cpp_headers/keyring.o 00:01:59.346 CXX test/cpp_headers/lvol.o 00:01:59.346 CXX test/cpp_headers/likely.o 00:01:59.346 CXX test/cpp_headers/log.o 00:01:59.346 CXX test/cpp_headers/memory.o 00:01:59.346 CXX test/cpp_headers/mmio.o 00:01:59.346 CXX test/cpp_headers/nbd.o 00:01:59.346 CXX test/cpp_headers/notify.o 00:01:59.346 CXX test/cpp_headers/net.o 00:01:59.346 CXX test/cpp_headers/nvme_ocssd.o 00:01:59.346 CXX test/cpp_headers/nvme_intel.o 00:01:59.346 CXX test/cpp_headers/nvme.o 00:01:59.346 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:59.346 CXX test/cpp_headers/nvme_spec.o 00:01:59.346 CXX test/cpp_headers/nvme_zns.o 00:01:59.346 CXX test/cpp_headers/nvmf_cmd.o 00:01:59.346 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:59.346 CXX test/cpp_headers/nvmf_spec.o 00:01:59.346 CXX test/cpp_headers/nvmf.o 00:01:59.346 CXX test/cpp_headers/opal.o 00:01:59.346 CXX test/cpp_headers/nvmf_transport.o 00:01:59.346 CXX test/cpp_headers/opal_spec.o 00:01:59.346 CXX test/cpp_headers/pci_ids.o 00:01:59.346 CXX test/cpp_headers/pipe.o 00:01:59.346 CXX test/cpp_headers/queue.o 00:01:59.346 CXX test/cpp_headers/reduce.o 00:01:59.346 CXX test/cpp_headers/rpc.o 00:01:59.346 CXX test/cpp_headers/scheduler.o 00:01:59.346 CXX test/cpp_headers/scsi.o 00:01:59.346 CXX test/cpp_headers/scsi_spec.o 00:01:59.346 CXX test/cpp_headers/sock.o 00:01:59.346 CXX test/cpp_headers/stdinc.o 00:01:59.346 CXX test/cpp_headers/string.o 00:01:59.346 CXX test/cpp_headers/thread.o 00:01:59.346 CXX test/cpp_headers/trace.o 00:01:59.346 CXX test/cpp_headers/tree.o 00:01:59.346 CXX test/cpp_headers/trace_parser.o 00:01:59.346 CXX test/cpp_headers/ublk.o 00:01:59.346 CXX test/cpp_headers/util.o 00:01:59.346 CXX test/cpp_headers/uuid.o 00:01:59.346 CXX test/cpp_headers/version.o 00:01:59.346 CC test/app/jsoncat/jsoncat.o 00:01:59.346 CC test/app/stub/stub.o 00:01:59.346 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:59.346 CC examples/ioat/verify/verify.o 00:01:59.346 CC test/env/vtophys/vtophys.o 00:01:59.346 CC test/thread/poller_perf/poller_perf.o 00:01:59.346 CC test/app/histogram_perf/histogram_perf.o 00:01:59.346 CC examples/util/zipf/zipf.o 00:01:59.346 CC test/env/pci/pci_ut.o 00:01:59.346 CC test/env/memory/memory_ut.o 00:01:59.346 CC examples/ioat/perf/perf.o 00:01:59.346 CC app/fio/nvme/fio_plugin.o 00:01:59.634 CXX test/cpp_headers/vfio_user_pci.o 00:01:59.634 CXX test/cpp_headers/vfio_user_spec.o 00:01:59.634 CC test/app/bdev_svc/bdev_svc.o 00:01:59.634 CC app/fio/bdev/fio_plugin.o 00:01:59.634 CC test/dma/test_dma/test_dma.o 00:01:59.908 LINK spdk_lspci 00:01:59.908 LINK rpc_client_test 00:02:00.167 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:00.167 LINK spdk_trace_record 00:02:00.167 CXX test/cpp_headers/vhost.o 00:02:00.167 CXX test/cpp_headers/vmd.o 00:02:00.167 CXX test/cpp_headers/xor.o 00:02:00.167 CXX test/cpp_headers/zipf.o 00:02:00.167 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:00.167 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:00.167 LINK spdk_nvme_discover 00:02:00.167 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:00.167 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:00.167 LINK nvmf_tgt 00:02:00.167 LINK env_dpdk_post_init 00:02:00.167 LINK zipf 00:02:00.167 LINK jsoncat 00:02:00.167 LINK iscsi_tgt 00:02:00.167 LINK histogram_perf 00:02:00.167 LINK spdk_tgt 00:02:00.167 LINK poller_perf 00:02:00.167 LINK stub 00:02:00.167 LINK interrupt_tgt 00:02:00.167 LINK vtophys 00:02:00.426 LINK bdev_svc 00:02:00.426 LINK verify 00:02:00.426 LINK ioat_perf 00:02:00.426 LINK spdk_dd 00:02:00.426 LINK pci_ut 00:02:00.426 LINK spdk_trace 00:02:00.426 LINK test_dma 00:02:00.685 LINK spdk_bdev 00:02:00.685 LINK spdk_nvme 00:02:00.685 LINK vhost_fuzz 00:02:00.685 LINK nvme_fuzz 00:02:00.685 CC examples/idxd/perf/perf.o 00:02:00.685 CC examples/vmd/led/led.o 00:02:00.685 CC examples/vmd/lsvmd/lsvmd.o 00:02:00.685 CC examples/sock/hello_world/hello_sock.o 00:02:00.685 CC test/event/reactor_perf/reactor_perf.o 00:02:00.685 CC test/event/reactor/reactor.o 00:02:00.685 CC test/event/event_perf/event_perf.o 00:02:00.685 CC test/event/app_repeat/app_repeat.o 00:02:00.685 CC test/event/scheduler/scheduler.o 00:02:00.685 LINK spdk_top 00:02:00.685 CC examples/thread/thread/thread_ex.o 00:02:00.944 LINK mem_callbacks 00:02:00.944 LINK spdk_nvme_identify 00:02:00.944 LINK spdk_nvme_perf 00:02:00.944 CC app/vhost/vhost.o 00:02:00.944 LINK led 00:02:00.944 LINK reactor_perf 00:02:00.944 LINK reactor 00:02:00.944 LINK app_repeat 00:02:00.944 LINK lsvmd 00:02:00.944 LINK event_perf 00:02:00.944 CC test/nvme/aer/aer.o 00:02:00.944 LINK scheduler 00:02:00.944 CC test/nvme/overhead/overhead.o 00:02:00.944 CC test/nvme/startup/startup.o 00:02:00.944 CC test/nvme/sgl/sgl.o 00:02:00.944 CC test/nvme/simple_copy/simple_copy.o 00:02:00.944 CC test/nvme/err_injection/err_injection.o 00:02:00.944 CC test/nvme/connect_stress/connect_stress.o 00:02:00.944 CC test/nvme/reset/reset.o 00:02:00.944 CC test/nvme/reserve/reserve.o 00:02:00.944 CC test/nvme/compliance/nvme_compliance.o 00:02:00.944 CC test/nvme/e2edp/nvme_dp.o 00:02:00.944 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.203 CC test/nvme/cuse/cuse.o 00:02:01.203 CC test/nvme/boot_partition/boot_partition.o 00:02:01.203 CC test/nvme/fused_ordering/fused_ordering.o 00:02:01.203 LINK thread 00:02:01.203 CC test/nvme/fdp/fdp.o 00:02:01.203 CC test/accel/dif/dif.o 00:02:01.203 CC test/blobfs/mkfs/mkfs.o 00:02:01.203 LINK idxd_perf 00:02:01.203 LINK vhost 00:02:01.203 CC test/lvol/esnap/esnap.o 00:02:01.203 LINK hello_sock 00:02:01.203 LINK boot_partition 00:02:01.203 LINK simple_copy 00:02:01.203 LINK memory_ut 00:02:01.203 LINK doorbell_aers 00:02:01.203 LINK startup 00:02:01.203 LINK connect_stress 00:02:01.203 LINK err_injection 00:02:01.461 LINK reserve 00:02:01.461 LINK fused_ordering 00:02:01.461 LINK aer 00:02:01.461 LINK nvme_dp 00:02:01.461 LINK reset 00:02:01.461 LINK fdp 00:02:01.461 LINK sgl 00:02:01.461 LINK mkfs 00:02:01.461 LINK overhead 00:02:01.461 LINK nvme_compliance 00:02:01.720 CC examples/accel/perf/accel_perf.o 00:02:01.720 LINK dif 00:02:01.720 CC examples/blob/hello_world/hello_blob.o 00:02:01.720 CC examples/blob/cli/blobcli.o 00:02:01.720 LINK iscsi_fuzz 00:02:01.978 CC examples/nvme/reconnect/reconnect.o 00:02:01.978 CC examples/nvme/hello_world/hello_world.o 00:02:01.978 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.978 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:01.978 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:01.978 CC examples/nvme/abort/abort.o 00:02:01.978 CC examples/nvme/arbitration/arbitration.o 00:02:01.978 CC 
examples/nvme/hotplug/hotplug.o 00:02:01.978 LINK hello_blob 00:02:01.978 LINK pmr_persistence 00:02:01.978 LINK cmb_copy 00:02:01.978 LINK accel_perf 00:02:02.238 LINK hotplug 00:02:02.238 LINK hello_world 00:02:02.238 CC test/bdev/bdevio/bdevio.o 00:02:02.238 LINK blobcli 00:02:02.238 LINK arbitration 00:02:02.238 LINK reconnect 00:02:02.238 LINK abort 00:02:02.497 LINK nvme_manage 00:02:02.497 LINK cuse 00:02:02.497 LINK bdevio 00:02:02.757 CC examples/bdev/hello_world/hello_bdev.o 00:02:02.757 CC examples/bdev/bdevperf/bdevperf.o 00:02:03.057 LINK hello_bdev 00:02:03.341 LINK bdevperf 00:02:03.909 CC examples/nvmf/nvmf/nvmf.o 00:02:04.478 LINK nvmf 00:02:06.384 LINK esnap 00:02:06.643 00:02:06.643 real 0m55.568s 00:02:06.643 user 8m32.694s 00:02:06.643 sys 4m16.416s 00:02:06.643 18:38:51 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.643 18:38:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:06.643 ************************************ 00:02:06.643 END TEST make 00:02:06.643 ************************************ 00:02:06.643 18:38:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.643 18:38:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:06.643 18:38:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:06.643 18:38:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.643 18:38:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:06.643 18:38:51 -- pm/common@44 -- $ pid=2182899 00:02:06.643 18:38:51 -- pm/common@50 -- $ kill -TERM 2182899 00:02:06.643 18:38:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.643 18:38:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.643 18:38:51 -- pm/common@44 -- $ pid=2182900 00:02:06.643 18:38:51 -- pm/common@50 -- $ kill -TERM 2182900 00:02:06.643 18:38:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.643 18:38:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.643 18:38:51 -- pm/common@44 -- $ pid=2182902 00:02:06.643 18:38:51 -- pm/common@50 -- $ kill -TERM 2182902 00:02:06.643 18:38:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.643 18:38:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.643 18:38:51 -- pm/common@44 -- $ pid=2182926 00:02:06.643 18:38:51 -- pm/common@50 -- $ sudo -E kill -TERM 2182926 00:02:06.903 18:38:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:06.903 18:38:51 -- nvmf/common.sh@7 -- # uname -s 00:02:06.903 18:38:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:06.903 18:38:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:06.903 18:38:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:06.903 18:38:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:06.903 18:38:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:06.903 18:38:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:06.903 18:38:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:06.903 18:38:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:06.903 18:38:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:06.903 18:38:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:06.903 
18:38:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:06.903 18:38:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:06.903 18:38:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:06.903 18:38:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:06.903 18:38:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:06.903 18:38:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:06.903 18:38:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:06.903 18:38:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:06.903 18:38:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.903 18:38:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.903 18:38:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.903 18:38:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.903 18:38:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.903 18:38:51 -- paths/export.sh@5 -- # export PATH 00:02:06.903 18:38:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.903 18:38:51 -- nvmf/common.sh@47 -- # : 0 00:02:06.903 18:38:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:06.903 18:38:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:06.903 18:38:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:06.903 18:38:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:06.903 18:38:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:06.903 18:38:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:06.903 18:38:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:06.904 18:38:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:06.904 18:38:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:06.904 18:38:51 -- spdk/autotest.sh@32 -- # uname -s 00:02:06.904 18:38:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:06.904 18:38:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:06.904 18:38:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.904 18:38:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:06.904 18:38:51 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.904 18:38:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:06.904 18:38:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:06.904 18:38:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:06.904 18:38:51 -- spdk/autotest.sh@48 -- # udevadm_pid=2245232 00:02:06.904 18:38:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:06.904 18:38:51 -- pm/common@17 -- # local monitor 00:02:06.904 18:38:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:06.904 18:38:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.904 18:38:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.904 18:38:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.904 18:38:51 -- pm/common@21 -- # date +%s 00:02:06.904 18:38:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.904 18:38:51 -- pm/common@21 -- # date +%s 00:02:06.904 18:38:51 -- pm/common@25 -- # sleep 1 00:02:06.904 18:38:51 -- pm/common@21 -- # date +%s 00:02:06.904 18:38:51 -- pm/common@21 -- # date +%s 00:02:06.904 18:38:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839131 00:02:06.904 18:38:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839131 00:02:06.904 18:38:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839131 00:02:06.904 18:38:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839131 00:02:06.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839131_collect-vmstat.pm.log 00:02:06.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839131_collect-cpu-load.pm.log 00:02:06.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839131_collect-cpu-temp.pm.log 00:02:06.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839131_collect-bmc-pm.bmc.pm.log 00:02:07.843 18:38:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.843 18:38:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.843 18:38:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:07.843 18:38:52 -- common/autotest_common.sh@10 -- # set +x 00:02:07.843 18:38:52 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.843 18:38:52 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:07.843 18:38:52 -- common/autotest_common.sh@10 -- # set +x 00:02:07.843 18:38:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:07.843 18:38:52 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.843 18:38:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
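The pm collectors launched above (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) each run in the background with their output redirected to a .pm.log, and the stop_monitor_resources trace at the top of this run later finds the matching .pid files and sends kill -TERM. A minimal sketch of that pid-file convention, using illustrative helper names rather than the actual scripts/perf/pm code:

    #!/usr/bin/env bash
    # Start one sampler in the background and record its PID for teardown.
    start_monitor() {
      local collector=$1 outdir=$2 tag=$3
      "$collector" -d "$outdir" -l -p "$tag" &          # same -d/-l/-p flags as in the trace above
      echo $! > "$outdir/$(basename "$collector").pid"  # pid file checked again at cleanup
    }
    # Mirror of the kill -TERM teardown seen in the stop_monitor_resources trace.
    stop_monitor() {
      local pidfile=$1
      [[ -e $pidfile ]] || return 0   # sampler never started, nothing to signal
      kill -TERM "$(cat "$pidfile")"
      rm -f "$pidfile"
    }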
00:02:07.843 18:38:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.843 18:38:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.843 18:38:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:07.843 18:38:52 -- common/autotest_common.sh@1453 -- # uname 00:02:07.843 18:38:52 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:02:07.843 18:38:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:07.843 18:38:52 -- common/autotest_common.sh@1473 -- # uname 00:02:07.843 18:38:52 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:02:07.843 18:38:52 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:07.843 18:38:52 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:07.843 18:38:52 -- spdk/autotest.sh@72 -- # hash lcov 00:02:07.843 18:38:52 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:07.843 18:38:52 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:07.843 --rc lcov_branch_coverage=1 00:02:07.843 --rc lcov_function_coverage=1 00:02:07.843 --rc genhtml_branch_coverage=1 00:02:07.843 --rc genhtml_function_coverage=1 00:02:07.843 --rc genhtml_legend=1 00:02:07.843 --rc geninfo_all_blocks=1 00:02:07.843 ' 00:02:07.843 18:38:52 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:07.843 --rc lcov_branch_coverage=1 00:02:07.843 --rc lcov_function_coverage=1 00:02:07.843 --rc genhtml_branch_coverage=1 00:02:07.843 --rc genhtml_function_coverage=1 00:02:07.843 --rc genhtml_legend=1 00:02:07.843 --rc geninfo_all_blocks=1 00:02:07.843 ' 00:02:07.843 18:38:52 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:07.843 --rc lcov_branch_coverage=1 00:02:07.843 --rc lcov_function_coverage=1 00:02:07.843 --rc genhtml_branch_coverage=1 00:02:07.843 --rc genhtml_function_coverage=1 00:02:07.843 --rc genhtml_legend=1 00:02:07.843 --rc geninfo_all_blocks=1 00:02:07.843 --no-external' 00:02:07.843 18:38:52 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:07.843 --rc lcov_branch_coverage=1 00:02:07.843 --rc lcov_function_coverage=1 00:02:07.843 --rc genhtml_branch_coverage=1 00:02:07.843 --rc genhtml_function_coverage=1 00:02:07.843 --rc genhtml_legend=1 00:02:07.843 --rc geninfo_all_blocks=1 00:02:07.843 --no-external' 00:02:07.843 18:38:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:08.103 lcov: LCOV version 1.14 00:02:08.103 18:38:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:22.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:22.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:37.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:37.878 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:37.878 
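The lcov invocation above runs with -c -i to record a zero-count baseline before any test executes, which is also why the geninfo warnings that follow are harmless: the cpp_headers objects are header compile checks containing no instrumented functions. Once the suites finish, a second capture is typically merged with this baseline so sources never exercised by a test still appear at 0% instead of vanishing from the report. A sketch of that flow reusing the LCOV_OPTS exported above (tracefile names are illustrative):

    lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"  # pre-test baseline
    # ... run the test suites ...
    lcov $LCOV_OPTS -q -c -t Autotest -d "$src" -o "$out/cov_test.info"     # post-test capture
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" \
         -o "$out/cov_total.info"                                           # union of the two
    genhtml $LCOV_OPTS -o "$out/coverage" "$out/cov_total.info"             # render HTML report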
[repetitive geninfo output elided: GCOV did not produce any data for the remaining header-only objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ (barrier.gcno through uuid.gcno), each reported with the same ':no functions found' warning] 00:02:37.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:37.879 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:37.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:37.879 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:37.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:37.879 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:37.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:37.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:37.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:37.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:37.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:37.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:43.163 18:39:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:43.163 18:39:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:43.163 18:39:27 -- common/autotest_common.sh@10 -- # set +x 00:02:43.163 18:39:27 -- spdk/autotest.sh@91 -- # rm -f 00:02:43.163 18:39:27 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.700 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:02:45.700 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:45.700 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:45.959 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:45.959 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:45.959 18:39:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:45.959 18:39:30 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:45.959 18:39:30 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:45.959 18:39:30 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:45.959 18:39:30 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:45.959 18:39:30 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:45.959 18:39:30 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:45.959 18:39:30 -- 
common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.959 18:39:30 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:45.959 18:39:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:45.959 18:39:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:45.959 18:39:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:45.959 18:39:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:45.959 18:39:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:45.959 18:39:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.959 No valid GPT data, bailing 00:02:45.959 18:39:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.959 18:39:30 -- scripts/common.sh@391 -- # pt= 00:02:45.959 18:39:30 -- scripts/common.sh@392 -- # return 1 00:02:45.959 18:39:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.959 1+0 records in 00:02:45.959 1+0 records out 00:02:45.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00241693 s, 434 MB/s 00:02:45.959 18:39:30 -- spdk/autotest.sh@118 -- # sync 00:02:45.959 18:39:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.959 18:39:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:45.959 18:39:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:52.531 18:39:36 -- spdk/autotest.sh@124 -- # uname -s 00:02:52.531 18:39:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:52.531 18:39:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:52.531 18:39:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.531 18:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.531 18:39:36 -- common/autotest_common.sh@10 -- # set +x 00:02:52.531 ************************************ 00:02:52.531 START TEST setup.sh 00:02:52.531 ************************************ 00:02:52.531 18:39:36 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:52.531 * Looking for test storage... 00:02:52.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.531 18:39:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:52.531 18:39:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:52.531 18:39:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:52.531 18:39:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.531 18:39:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.531 18:39:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:52.531 ************************************ 00:02:52.531 START TEST acl 00:02:52.531 ************************************ 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:52.531 * Looking for test storage... 
00:02:52.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.531 18:39:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:52.531 18:39:37 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:52.531 18:39:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:52.531 18:39:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:52.531 18:39:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:52.531 18:39:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:52.531 18:39:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:52.531 18:39:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.531 18:39:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.821 18:39:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:55.821 18:39:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:55.821 18:39:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.821 18:39:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:55.821 18:39:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.821 18:39:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:58.356 Hugepages 00:02:58.356 node hugesize free / total 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.356 00:02:58.356 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.356 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.356 [identical xtrace elided for the remaining ioatdma controllers 0000:00:04.2 through 0000:80:04.3: each matches *:*:*.*, fails the [[ ioatdma == nvme ]] check, and is skipped with continue] 00:02:58.357 18:39:43
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:58.357 18:39:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.357 18:39:43 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.357 18:39:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.357 18:39:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.357 ************************************ 00:02:58.357 START TEST denied 00:02:58.357 ************************************ 00:02:58.357 18:39:43 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:58.357 18:39:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:02:58.357 18:39:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:58.357 18:39:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:02:58.357 18:39:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.357 18:39:43 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.709 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:03:01.709 18:39:46 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.709 18:39:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.900 00:03:05.900 real 0m7.118s 00:03:05.900 user 0m2.294s 00:03:05.900 sys 0m4.080s 00:03:05.900 18:39:50 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.900 18:39:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:05.900 ************************************ 00:03:05.900 END TEST denied 00:03:05.900 ************************************ 00:03:05.900 18:39:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:05.900 18:39:50 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.900 18:39:50 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.900 18:39:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.900 ************************************ 00:03:05.900 START TEST allowed 00:03:05.900 ************************************ 00:03:05.900 18:39:50 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:05.900 18:39:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:03:05.900 18:39:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:05.900 18:39:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:03:05.900 18:39:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.900 18:39:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.093 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.093 18:39:54 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:10.093 18:39:54 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:10.093 18:39:54 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:10.093 18:39:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.093 18:39:54 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.403 00:03:13.403 real 0m7.282s 00:03:13.403 user 0m2.332s 00:03:13.403 sys 0m4.057s 00:03:13.403 18:39:57 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.403 18:39:57 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:13.403 ************************************ 00:03:13.403 END TEST allowed 00:03:13.403 ************************************ 00:03:13.403 00:03:13.403 real 0m20.858s 00:03:13.403 user 0m7.094s 00:03:13.403 sys 0m12.351s 00:03:13.403 18:39:57 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.403 18:39:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:13.403 ************************************ 00:03:13.403 END TEST acl 00:03:13.403 ************************************ 00:03:13.403 18:39:57 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:13.403 18:39:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.403 18:39:57 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.403 18:39:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.403 ************************************ 00:03:13.403 START TEST hugepages 00:03:13.403 ************************************ 00:03:13.403 18:39:57 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:13.403 * Looking for test storage... 00:03:13.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69084196 kB' 'MemAvailable: 72951724 kB' 'Buffers: 2704 kB' 'Cached: 14775672 kB' 'SwapCached: 0 kB' 'Active: 11662992 kB' 'Inactive: 3702268 kB' 'Active(anon): 11209264 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590340 kB' 'Mapped: 212772 kB' 'Shmem: 10622380 kB' 'KReclaimable: 547492 kB' 'Slab: 1234256 kB' 'SReclaimable: 547492 kB' 'SUnreclaim: 686764 kB' 'KernelStack: 22784 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 12710428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220152 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:13.403 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.404 [repetitive xtrace elided: setup/common.sh@31-32 re-splits each meminfo line on IFS=': ', reads the next var/val pair, and continues field by field (MemFree, MemAvailable, Buffers, Cached, ... through AnonHugePages in this span) while scanning for Hugepagesize] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.405 18:39:58 
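The loop elided above is the get_meminfo helper in setup/common.sh: it splits each meminfo line on ': ', tests the key against the requested name, and echoes the value on the first match. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied verbatim from the SPDK source:

# Look up one key in /proc/meminfo the way the traced loop does.
# "Hugepagesize:    2048 kB" splits into var=Hugepagesize, val=2048, _=kB.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 2048 for Hugepagesize
            return 0
        fi
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo Hugepagesize)    # -> 2048 (kB), as echoed above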
00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:13.405 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:13.406 18:39:58 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:13.406 18:39:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.406 18:39:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.406 18:39:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:13.406 ************************************
00:03:13.406 START TEST default_setup
00:03:13.406 ************************************
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.406 18:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.943 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:15.943 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:16.202 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:17.143 0000:86:00.0 (8086 0a54): nvme -> vfio-pci
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.143 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71217516 kB' 'MemAvailable: 75084980 kB' 'Buffers: 2704 kB' 'Cached: 14775788 kB' 'SwapCached: 0 kB' 'Active: 11683668 kB' 'Inactive: 3702268 kB' 'Active(anon): 11229940 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610744 kB' 'Mapped: 212264 kB' 'Shmem: 10622496 kB' 'KReclaimable: 547428 kB' 'Slab: 1232532 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 685104 kB' 'KernelStack: 22976 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12734136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220312 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
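Two results in the trace above are worth spelling out: get_test_nr_hugepages turned the 2097152 kB request into 2097152 / 2048 = 1024 pages, all assigned to node 0, and scripts/setup.sh rebound the ioatdma channels and the NVMe controller to vfio-pci before reserving the pool; the snapshot just captured confirms HugePages_Total and HugePages_Free at 1024. setup.sh's internals are not part of this trace; a sketch of the reservation step it implies, assuming only the standard per-node sysfs knob:

#!/usr/bin/env bash
# Reserve 1024 x 2048 kB pages on node 0, matching what the traced test asked for.
# Illustration of the stock kernel interface, not SPDK's setup.sh. Needs root.
size_kb=2097152
page_kb=2048
nr=$((size_kb / page_kb))    # -> 1024 pages

echo "$nr" > "/sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB/nr_hugepages"
grep -E 'HugePages_(Total|Free)' /proc/meminfo    # both should report 1024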
00:03:17.143-00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace elided: the read loop tests every captured key against AnonHugePages and continues until it matches]
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
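get_meminfo just returned 0 for AnonHugePages (so THP contributes nothing here), and verify_nr_hugepages immediately calls it again for HugePages_Surp; the trace of that second call continues below. The 'local node=' / 'mapfile -t mem' / Node-prefix-strip steps visible in each call are what let one parser serve both the global file and per-node meminfo, whose lines start with "Node 0 "; with node empty, the sysfs path test fails and it falls back to /proc/meminfo. A sketch of that selection logic, reconstructed from the xtrace rather than copied from setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern below

# get_meminfo KEY [NODE]: read /proc/meminfo, or node N's meminfo when N is given.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node "Node 0 " prefix, if any
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp      # global counter -> 0, as the trace below echoes
get_meminfo HugePages_Free 0    # node 0's counter, if node0 exists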
00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71217840 kB' 'MemAvailable: 75085304 kB' 'Buffers: 2704 kB' 'Cached: 14775792 kB' 'SwapCached: 0 kB' 'Active: 11682708 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228980 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609756 kB' 'Mapped: 212204 kB' 'Shmem: 10622500 kB' 'KReclaimable: 547428 kB' 'Slab: 1232052 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684624 kB' 'KernelStack: 22992 kB' 'PageTables: 9576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12734916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.145 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.146 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.146 18:40:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace elided: the common.sh@32 "continue" and @31 "IFS=': ' / read -r var val _" steps repeat for each remaining non-matching /proc/meminfo field, SecPageTables through HugePages_Rsvd (00:03:17.146-00:03:17.147) ...]
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
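Aside: every get_meminfo call traced here follows the same pattern: read /proc/meminfo (or a per-node meminfo file) under IFS=': ', skip each key that does not match the requested one, and echo the matching value. A minimal standalone sketch of that technique, in the same spirit as setup/common.sh (the name meminfo_get and the optional file argument are illustrative, not SPDK helpers):

    # Sketch: fetch one "Key: value" field the way get_meminfo scans above.
    meminfo_get() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as traced
            echo "$val"                        # value only; a trailing "kB" lands in $_
            return 0
        done < "$mem_f"
        return 1                               # requested key absent
    }

    meminfo_get HugePages_Surp   # prints 0 on this host, matching surp=0 above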
00:03:17.147 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71216724 kB' 'MemAvailable: 75084188 kB' 'Buffers: 2704 kB' 'Cached: 14775808 kB' 'SwapCached: 0 kB' 'Active: 11682400 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228672 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609504 kB' 'Mapped: 212212 kB' 'Shmem: 10622516 kB' 'KReclaimable: 547428 kB' 'Slab: 1232120 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684692 kB' 'KernelStack: 22848 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12733940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220328 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[... xtrace elided: each field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped via "continue" (00:03:17.147-00:03:17.413) ...]
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.413 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71216380 kB' 'MemAvailable: 75083844 kB' 'Buffers: 2704 kB' 'Cached: 14775832 kB' 'SwapCached: 0 kB' 'Active: 11682392 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228664 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609376 kB' 'Mapped: 212196 kB' 'Shmem: 10622540 kB' 'KReclaimable: 547428 kB' 'Slab: 1232152 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684724 kB' 'KernelStack: 22928 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12732772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220312 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
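Aside: the hugepages.sh@107/@109 guards above encode the check this default_setup pass is making: HugePages_Total reported by the kernel must equal the requested pool plus surplus plus reserved pages, here 1024 == 1024 + 0 + 0. The byte math is consistent with the snapshot as well: 1024 pages x 2048 kB (Hugepagesize) = 2097152 kB, exactly the Hugetlb line, i.e. 2 GiB. Restated as a self-contained check with the values this trace reads back (reusing the illustrative meminfo_get sketch above):

    nr_hugepages=1024 surp=0 resv=0                   # values echoed by the test above
    total=$(meminfo_get HugePages_Total)              # 1024 on this host
    page_kb=$(meminfo_get Hugepagesize)               # 2048
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2
    (( total * page_kb == 2097152 ))                  # Hugetlb: 2097152 kB = 2 GiB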
[... xtrace elided: each field from MemTotal through Unaccepted is tested against HugePages_Total and skipped via "continue" (00:03:17.413-00:03:17.415) ...]
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.415 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.416 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38281964 kB' 'MemUsed: 9786432 kB' 'SwapCached: 0 kB' 'Active: 5953552 kB' 'Inactive: 438904 kB' 'Active(anon): 5645188 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999708 kB' 'Mapped: 72088 kB' 'AnonPages: 395864 kB' 'Shmem: 5252440 kB' 'KernelStack: 13992 kB' 'PageTables: 6256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339660 kB' 'Slab: 660228 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 320568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
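Aside: the per-node pass starting here points the same scanner at /sys/devices/system/node/node0/meminfo; get_nodes derives each index with the ${node##*node} prefix-strip (node0 -> 0), and this host reports no_nodes=2 with the whole 1024-page pool on node 0 (nodes_sys[0]=1024, nodes_sys[1]=0 above). One wrinkle, visible in the mem=("${mem[@]#Node +([0-9]) }") step: per-node meminfo lines carry a "Node <N> " prefix that must be stripped before keys match. An illustrative standalone equivalent (node_meminfo_get is not an SPDK helper):

    shopt -s extglob nullglob                  # node+([0-9]) globbing needs extglob
    node_meminfo_get() {
        local get=$1 mem_f=$2 var val _
        # Per-node files prefix each line with "Node <N> "; consume those two fields.
        while IFS=': ' read -r _ _ var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    declare -A nodes_hp
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_hp[${node##*node}]=$(node_meminfo_get HugePages_Total "$node/meminfo")
    done
    for idx in "${!nodes_hp[@]}"; do
        echo "node$idx: ${nodes_hp[$idx]} hugepages"   # node0: 1024 on this host
    done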
00:03:17.416 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace elided: each node0 meminfo field from MemTotal through Unaccepted is tested against HugePages_Surp and skipped via "continue" (00:03:17.416-00:03:17.417) ...]
00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.417 18:40:02
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:17.417 node0=1024 expecting 1024 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:17.417 00:03:17.417 real 0m4.119s 00:03:17.417 user 0m1.366s 00:03:17.417 sys 0m1.992s 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.417 18:40:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:17.417 ************************************ 00:03:17.417 END TEST default_setup 00:03:17.417 ************************************ 00:03:17.417 18:40:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:17.417 18:40:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.417 18:40:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.417 18:40:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.417 ************************************ 00:03:17.417 START TEST per_node_1G_alloc 00:03:17.417 ************************************ 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
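get_test_nr_hugepages has just been invoked with size 1048576 kB (1 GiB) for nodes 0 and 1; the hugepages.sh@55-@57 steps below reduce that request to a count of default-sized pages. A minimal sketch of the arithmetic, assuming only the 2048 kB default page size that the meminfo snapshots in this log report ('Hugepagesize: 2048 kB'):

    # Sketch only: how a 1 GiB request becomes nr_hugepages=512.
    default_hugepages=2048                        # kB, assumed from 'Hugepagesize: 2048 kB'
    size=1048576                                  # kB requested (1 GiB)
    (( size >= default_hugepages )) || exit 1     # mirrors hugepages.sh@55 below
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512
    echo "nr_hugepages=$nr_hugepages"             # matches nr_hugepages=512 at hugepages.sh@57 below

The per-node loop that follows then stores 512 into nodes_test[0] and nodes_test[1], one entry per user node.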
00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.417 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.418 18:40:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.757 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:20.757 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.757 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- 
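With NRHUGE=512 and HUGENODE=0,1 exported, scripts/setup.sh re-runs and the PCI functions above remain bound to vfio-pci. The log does not show setup.sh's internals; a hedged sketch of the per-node reservation it is expected to perform, using the kernel's standard per-node sysfs knob (the path is kernel ABI, not setup.sh verbatim):

    # Sketch, not setup.sh verbatim: reserve NRHUGE 2048 kB pages on each
    # node named in HUGENODE via the standard per-node sysfs interface.
    NRHUGE=512
    HUGENODE=0,1
    IFS=',' read -ra nodes <<< "$HUGENODE"
    for n in "${nodes[@]}"; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep HugePages_Total /proc/meminfo   # 1024 expected: 512 per node, matching the snapshots below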
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.757 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.758 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71199016 kB' 'MemAvailable: 75066480 kB' 'Buffers: 2704 kB' 'Cached: 14775924 kB' 'SwapCached: 0 kB' 'Active: 11680696 kB' 'Inactive: 3702268 kB' 'Active(anon): 11226968 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607032 kB' 'Mapped: 211224 kB' 'Shmem: 10622632 kB' 'KReclaimable: 547428 kB' 'Slab: 1231056 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683628 kB' 'KernelStack: 23056 kB' 'PageTables: 9484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220488 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 
kB'
[xtrace elided: setup/common.sh@31-32 walk the snapshot above (MemTotal … HardwareCorrupted) against AnonHugePages; each non-matching field falls through to '# continue']
00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
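Every wall of '# continue' lines in this test is bash xtrace of the same common.sh helper: get_meminfo slurps the chosen meminfo file with mapfile, strips the 'Node N' prefix that per-node files carry, then splits each line on ': ' until the requested field matches. An approximate, self-contained reconstruction of that pattern (the real common.sh may differ in detail):

    # Approximate sketch of setup/common.sh's get_meminfo, not the script verbatim.
    shopt -s extglob   # needed for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node N ' prefixes
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this box, matching anon=0 above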
00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71197320 kB' 'MemAvailable: 75064784 kB' 'Buffers: 2704 kB' 'Cached: 14775928 kB' 'SwapCached: 0 kB' 'Active: 11680140 kB' 'Inactive: 3702268 kB' 'Active(anon): 11226412 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606928 kB' 'Mapped: 211164 kB' 'Shmem: 10622636 kB' 'KReclaimable: 547428 kB' 'Slab: 1231048 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683620 kB' 'KernelStack: 22976 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12722072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220472 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.759 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.759 18:40:05 
[xtrace elided: setup/common.sh@31-32 walk the snapshot above (Buffers … HugePages_Rsvd) against HugePages_Surp; each non-matching field falls through to '# continue']
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
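anon and surp have both resolved to 0, and the hugepages.sh@100 call below fetches HugePages_Rsvd the same way. With no anonymous THP, surplus, or (presumably) reserved pages in play, the verification this routine is building toward reduces to checking the pool against the 1024 pages requested. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above (the exact expressions in hugepages.sh are not shown in this log):

    # Sketch of the shape of the final check, not hugepages.sh verbatim.
    anon=$(get_meminfo AnonHugePages)     # 0 in this run (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)    # resolving in the scan below
    total=$(get_meminfo HugePages_Total)  # 1024 per the snapshots above
    (( total == 512 + 512 )) && echo 'node0+node1 pool matches the 2x512 request'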
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.761 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.762 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.762 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.762 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.762 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71198892 kB' 'MemAvailable: 75066356 kB' 'Buffers: 2704 kB' 'Cached: 14775944 kB' 'SwapCached: 0 kB' 'Active: 11680136 kB' 'Inactive: 3702268 kB' 'Active(anon): 11226408 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606896 kB' 'Mapped: 211164 kB' 'Shmem: 10622652 kB' 'KReclaimable: 547428 kB' 'Slab: 1230984 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683556 kB' 'KernelStack: 22928 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[... 00:03:20.762-764 setup/common.sh@31-32: scan of the keys MemTotal .. HugePages_Free; none match HugePages_Rsvd, each is skipped via IFS=': '; read -r var val _; continue ...]
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.764 nr_hugepages=1024
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.764 resv_hugepages=0
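The two counters just computed (surp=0, resv=0) can be spot-checked by hand against the same file; a one-off equivalent of the two lookups above, assuming standard awk and the same /proc/meminfo layout (not from the SPDK source):

    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # -> 0
    awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # -> 0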
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.764 surplus_hugepages=0
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.764 anon_hugepages=0
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.764 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71196780 kB' 'MemAvailable: 75064244 kB' 'Buffers: 2704 kB' 'Cached: 14775968 kB' 'SwapCached: 0 kB' 'Active: 11680744 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227016 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607512 kB' 'Mapped: 211164 kB' 'Shmem: 10622676 kB' 'KReclaimable: 547428 kB' 'Slab: 1230984 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683556 kB' 'KernelStack: 23024 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220456 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[... 00:03:20.764-766 setup/common.sh@31-32: scan of the keys MemTotal .. Unaccepted; none match HugePages_Total, each is skipped via IFS=': '; read -r var val _; continue ...]
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
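The arithmetic guards at hugepages.sh@107, @109, and @110 implement a simple accounting identity: the HugePages_Total reported by the kernel must equal the requested nr_hugepages plus any surplus and reserved pages. A standalone sketch of that check, with variable names taken from the trace (the surrounding test harness is assumed, not shown):

    # Accounting identity checked by hugepages.sh@107..@110 (sketch, using
    # the get_meminfo sketch above; not the verbatim SPDK source):
    nr_hugepages=1024                      # requested page count
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"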
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.766 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 39321668 kB' 'MemUsed: 8746728 kB' 'SwapCached: 0 kB' 'Active: 5953936 kB' 'Inactive: 438904 kB' 'Active(anon): 5645572 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999720 kB' 'Mapped: 71124 kB' 'AnonPages: 396228 kB' 'Shmem: 5252452 kB' 'KernelStack: 14152 kB' 'PageTables: 6504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339660 kB' 'Slab: 659236 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 319576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... 00:03:20.766-768 setup/common.sh@31-32: scan of the node0 keys MemTotal .. HugePages_Total; none match HugePages_Surp, each is skipped via IFS=': '; read -r var val _; continue ...]
00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.768 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31875432 kB' 'MemUsed: 12342772 kB' 'SwapCached: 0 kB' 'Active: 5726744 kB' 'Inactive: 3263364 kB' 'Active(anon): 5581380 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8778996 kB' 'Mapped: 140040 kB' 'AnonPages: 211168 kB' 'Shmem: 5370268 kB' 'KernelStack: 8968 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207768 kB' 'Slab: 571748 kB' 'SReclaimable: 207768 kB' 'SUnreclaim: 363980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
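
The scan that just completed is the whole of get_meminfo: pick the right meminfo file, strip the sysfs "Node N " prefix, then walk field by field until the requested key matches. A minimal bash sketch reconstructed from the @17-@33 trace lines above (names follow the trace; the real setup/common.sh may differ in detail):

    # Sketch of setup/common.sh's get_meminfo, reconstructed from the trace;
    # illustrative, not the verbatim SPDK source.
    shopt -s extglob   # the +([0-9]) pattern below needs extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; use them when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines read "Node 1 MemTotal: ..."; drop the "Node 1 " prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Every field that is not the requested key falls through,
            # which is the long run of "continue" lines in the trace.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the node 1 dump above, get_meminfo HugePages_Surp 1 walks every line until HugePages_Surp and prints 0, which hugepages.sh@116-117 then folds into nodes_test[1].
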
[xtrace condensed: setup/common.sh@31-32 repeat for every node 1 meminfo field, MemTotal through HugePages_Free; each non-matching field takes the "continue" branch]
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:20.769 node0=512 expecting 512
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:20.769 node1=512 expecting 512
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:20.769 real 0m3.076s
00:03:20.769 user 0m1.224s
00:03:20.769 sys 0m1.894s
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:20.769 18:40:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.769 ************************************
00:03:20.769 END TEST per_node_1G_alloc
00:03:20.769 ************************************
00:03:20.769 18:40:05 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:20.769 18:40:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
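
The END TEST banner closes per_node_1G_alloc: each of the two NUMA nodes ended up holding exactly 512 two-megabyte pages (1 GiB per node), which is what the @126-@128 loop just echoed. A hypothetical standalone restatement of that assertion, reusing the get_meminfo sketch above (the 512 expectation is this test's parameter, not a fixed constant):

    # Hypothetical re-check of the per-node split verified above.
    expected=512
    for node in 0 1; do
        total=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$total expecting $expected"
        [[ $total == "$expected" ]] || { echo "FAIL on node$node"; exit 1; }
    done
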
00:03:20.769 18:40:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:20.769 18:40:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.769 ************************************
00:03:20.769 START TEST even_2G_alloc
00:03:20.769 ************************************
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
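
Those few lines fix the whole geometry of the test: 2097152 kB requested, nr_hugepages=1024, and a two-pass loop that leaves nodes_test=(512 512). A sketch of the arithmetic implied by the @49-@84 lines (names follow the trace; the real get_test_nr_hugepages carries extra branches for user-supplied node lists):

    # Sketch of the sizing arithmetic seen in the trace; illustrative only.
    size=2097152             # requested pool in kB (2 GiB)
    default_hugepages=2048   # hugepage size in kB (Hugepagesize in the dumps)
    nr_hugepages=$((size / default_hugepages))   # -> 1024 pages
    _no_nodes=2
    declare -a nodes_test
    # No user node list, so give every node an even share, filling from
    # the highest node index down, matching the @81-@84 loop.
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$((nr_hugepages / 2))   # -> 512 per node
        ((_no_nodes--))
    done
    echo "${nodes_test[@]}"   # 512 512

The NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exports that follow steer the setup.sh rerun, presumably spreading the pool evenly across both nodes; that even split is exactly what verify_nr_hugepages goes on to check.
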
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.769 18:40:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.328 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.328 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.328 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.328 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.329 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71214508 kB' 'MemAvailable: 75081972 kB' 'Buffers: 2704 kB' 'Cached: 14776088 kB' 'SwapCached: 0 kB' 'Active: 11681664 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227936 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607836 kB' 'Mapped: 211316 kB' 'Shmem: 10622796 kB' 'KReclaimable: 547428 kB' 'Slab: 1231172 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683744 kB' 'KernelStack: 22816 kB' 'PageTables: 9524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220408 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[xtrace condensed: setup/common.sh@31-32 repeat for every meminfo field, MemTotal through HardwareCorrupted; each non-matching field takes the "continue" branch]
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
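
anon comes out 0 because the system-wide dump reports 'AnonHugePages: 0 kB', and the read only runs because the @96 gate saw "always [madvise] never", i.e. transparent hugepages are not bracketed as [never]. A sketch of that gate (standard THP sysfs knob; get_meminfo as sketched earlier):

    # Sketch of the hugepages.sh@96-97 gate from the trace; illustrative only.
    thp_setting=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_setting != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages, so sample AnonHugePages and
        # let the verification account for it alongside the hugetlb pool.
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon=$anon"
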
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.595 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71215176 kB' 'MemAvailable: 75082640 kB' 'Buffers: 2704 kB' 'Cached: 14776088 kB' 'SwapCached: 0 kB' 'Active: 11681108 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227380 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607268 kB' 'Mapped: 211316 kB' 'Shmem: 10622796 kB' 'KReclaimable: 547428 kB' 'Slab: 1231172 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683744 kB' 'KernelStack: 22704 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220360 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[xtrace condensed: setup/common.sh@31-32 repeat for the meminfo fields MemTotal through KernelStack, each taking the "continue" branch]
00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 
18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.596 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 
18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- 
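For readability, here is a minimal bash re-creation of the get_meminfo helper that the setup/common.sh@17-33 trace entries above are stepping through. It is reconstructed from the trace alone, so treat it as an illustrative sketch rather than the verbatim SPDK script; the behavior it shows (scan each meminfo line with IFS=': ' and print the value on the first key match) is exactly what produces the long runs of continue entries in this log.

    #!/usr/bin/env bash
    # Sketch reconstructed from the setup/common.sh@17-33 trace above; the
    # real common.sh may differ in details not visible in the log.
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1    # key to look up, e.g. HugePages_Surp
        local node=$2   # optional NUMA node; empty means system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live under /sys. With $node empty the test below
        # probes the nonexistent .../node/meminfo and fails, so the
        # system-wide file is used, matching the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            # One trace cycle per key: a mismatch hits 'continue', the match echoes.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp      # system-wide; prints 0 in the run above
    get_meminfo HugePages_Rsvd 0    # node 0; would read node0/meminfo

Every mismatching key costs one read/test/continue cycle under set -x, which is why a single lookup expands to dozens of trace lines here.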
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.597 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71214848 kB' 'MemAvailable: 75082312 kB' 'Buffers: 2704 kB' 'Cached: 14776108 kB' 'SwapCached: 0 kB' 'Active: 11680660 kB' 'Inactive: 3702268 kB' 'Active(anon): 11226932 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607336 kB' 'Mapped: 211184 kB' 'Shmem: 10622816 kB' 'KReclaimable: 547428 kB' 'Slab: 1231420 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683992 kB' 'KernelStack: 22816 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[... identical setup/common.sh@31-32 scan cycles repeat for every key, MemTotal through HugePages_Free, until the requested key is reached ...]
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:23.599 nr_hugepages=1024
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:23.599 resv_hugepages=0
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:23.599 surplus_hugepages=0
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:23.599 anon_hugepages=0
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
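The hugepages.sh@99-@109 entries above boil down to a small arithmetic check. A condensed sketch of that logic follows, reusing the get_meminfo sketch from earlier; the variable names follow the trace, and treating the literal 1024 in the traced checks as the HugePages_Free value read back just before this excerpt is an assumption on my part.

    # Condensed sketch of the check traced at setup/hugepages.sh@99-@110.
    # Assumes the get_meminfo sketch above; 'free' as the source of the
    # traced literal 1024 is an assumption, not visible in this excerpt.
    nr_hugepages=1024                       # pages the test configured
    surp=$(get_meminfo HugePages_Surp)      # -> 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # -> 0 in this run
    free=$(get_meminfo HugePages_Free)      # -> 1024 in this run
    total=$(get_meminfo HugePages_Total)    # -> 1024 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=0"

    # Both the free and total counts must account for surplus + reserved
    # pages; a false (( )) here fails the test under set -e, as in the log.
    (( free == nr_hugepages + surp + resv ))
    (( total == nr_hugepages + surp + resv ))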
18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71214492 kB' 'MemAvailable: 75081956 kB' 'Buffers: 2704 kB' 'Cached: 14776148 kB' 'SwapCached: 0 kB' 'Active: 11680344 kB' 'Inactive: 3702268 kB' 'Active(anon): 11226616 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606936 kB' 'Mapped: 211184 kB' 'Shmem: 10622856 kB' 'KReclaimable: 547428 kB' 'Slab: 1231420 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683992 kB' 'KernelStack: 22800 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12720212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220344 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.599 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.600 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
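The loop traced here is setup/common.sh's get_meminfo() scanning a meminfo file key by key: it strips the "Node <N> " prefix that per-node files carry, splits each line on ': ' into a key and a value, and echoes the value as soon as the requested key matches (1024 for HugePages_Total just below). A minimal standalone sketch of the same technique, assuming bash 4+ for mapfile; the function name and the simplified prefix-stripping are illustrative, not SPDK's exact helper:

#!/usr/bin/env bash
# Fetch one field from /proc/meminfo, or from a NUMA node's meminfo file
# when a node id is given (those files prefix every line with "Node <N> ").
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        line=${line#"Node $node "}           # no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then        # e.g. HugePages_Total
            echo "$val"                      # value only, "kB" unit dropped
            return 0
        fi
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total     -> 1024 on this box
#      get_meminfo_sketch HugePages_Surp 0    -> surplus pages on node 0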
00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.601 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 39323696 kB' 'MemUsed: 8744700 kB' 'SwapCached: 0 kB' 'Active: 5953296 kB' 'Inactive: 438904 kB' 'Active(anon): 5644932 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999824 kB' 'Mapped: 71136 kB' 'AnonPages: 395512 kB' 'Shmem: 5252556 kB' 'KernelStack: 13848 kB' 'PageTables: 6444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'KReclaimable: 339660 kB' 'Slab: 659788 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 320128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.602 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.603 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31890944 kB' 'MemUsed: 12327260 kB' 'SwapCached: 0 kB' 'Active: 5727100 kB' 'Inactive: 3263364 kB' 'Active(anon): 5581736 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8779052 kB' 'Mapped: 140048 kB' 'AnonPages: 211424 kB' 'Shmem: 5370324 kB' 'KernelStack: 8952 kB' 'PageTables: 2896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207768 kB' 'Slab: 571632 kB' 'SReclaimable: 207768 kB' 'SUnreclaim: 363864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
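With both per-node lookups finished, the check that follows is plain accounting: even_2G_alloc asked for 1024 2048 kB pages spread evenly, so each node's meminfo must report HugePages_Total: 512 and the per-node counts must sum back to the global total. A compact sketch of that even-allocation check, reusing the hypothetical get_meminfo_sketch helper above; the failure message is mine, not the test's:

#!/usr/bin/env bash
shopt -s extglob   # for the node+([0-9]) glob, as setup/common.sh uses
# Verify hugepages were spread evenly across NUMA nodes.
total=$(get_meminfo_sketch HugePages_Total)       # 1024 in this run
nodes=(/sys/devices/system/node/node+([0-9]))
expected=$(( total / ${#nodes[@]} ))              # 512 with 2 nodes
sum=0
for n in "${nodes[@]}"; do
    id=${n##*node}
    per_node=$(get_meminfo_sketch HugePages_Total "$id")
    echo "node$id=$per_node expecting $expected"
    (( sum += per_node ))
done
(( sum == total )) || echo "uneven allocation: got $sum of $total" >&2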
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.604 node0=512 expecting 512 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:23.604 node1=512 expecting 512 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.604 00:03:23.604 real 0m3.061s 00:03:23.604 user 0m1.215s 00:03:23.604 sys 0m1.888s 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.604 18:40:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.604 ************************************ 00:03:23.604 END TEST even_2G_alloc 00:03:23.604 ************************************ 00:03:23.604 18:40:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:23.604 18:40:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.604 18:40:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.604 18:40:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.604 ************************************ 00:03:23.604 START TEST odd_alloc 00:03:23.604 ************************************ 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@1123 -- # odd_alloc 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.604 18:40:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.904 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.904 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.7 (8086 2021): 
Already using the vfio-pci driver 00:03:26.904 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.904 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.904 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71208488 kB' 'MemAvailable: 75075952 kB' 'Buffers: 2704 kB' 'Cached: 14776244 kB' 'SwapCached: 0 kB' 'Active: 11681212 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227484 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607900 kB' 'Mapped: 211212 kB' 'Shmem: 10622952 kB' 'KReclaimable: 547428 kB' 'Slab: 1231316 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683888 kB' 'KernelStack: 22768 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12720328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc 
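All of the field-by-field scans in this log are the same setup/common.sh helper at work: get_meminfo slurps a meminfo file, strips any per-node prefix, then walks it line by line until the requested key matches and its value is echoed. A minimal, self-contained sketch of that traced behavior follows; this is a reconstruction from the xtrace, not the verbatim SPDK source (the get_meminfo name and the commands shown are the script's own, the exact structure here is illustrative):

#!/usr/bin/env bash
shopt -s extglob  # needed for the "Node +([0-9]) " prefix-strip pattern below

# Print the value of one meminfo field, as the traced helper does.
get_meminfo() {
	local get=$1 node=${2:-}   # field name, optional NUMA node
	local mem_f=/proc/meminfo
	local mem var val _ line

	# Prefer the per-node view when a node is given and the file exists;
	# with $node empty this test fails and we keep /proc/meminfo, exactly
	# as the "[[ -e /sys/devices/system/node/node/meminfo ]]" trace shows.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node meminfo prefixes every line with "Node <N> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		# "Key:   value [kB]" -> key into $var, number into $val
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] || continue   # not our field, next line
		echo "$val"
		return 0
	done
	return 1
}

# e.g. "get_meminfo HugePages_Surp" prints 0 on the machine traced here.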
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71209600 kB' 'MemAvailable: 75077064 kB' 'Buffers: 2704 kB' 'Cached: 14776248 kB' 'SwapCached: 0 kB' 'Active: 11680808 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227080 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607468 kB' 'Mapped: 211212 kB' 'Shmem: 10622956 kB' 'KReclaimable: 547428 kB' 'Slab: 1231320 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683892 kB' 'KernelStack: 22736 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12720352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
00:03:26.906 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace elided: every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and hits "continue"]
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
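A quick sanity read of the snapshot just printed: the hugepage pool is exactly the odd allocation under test, 1025 pages of 2048 kB, all free, none reserved or surplus, and 1025 * 2048 kB = 2099200 kB matches the reported Hugetlb total. The same cross-check can be reproduced with a one-liner (illustrative, not part of the test):

awk '/^HugePages_Total:/ {n = $2} /^Hugepagesize:/ {sz = $2}
     END {print n * sz " kB"}' /proc/meminfo   # prints 2099200 kB on this node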
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71210040 kB' 'MemAvailable: 75077504 kB' 'Buffers: 2704 kB' 'Cached: 14776264 kB' 'SwapCached: 0 kB' 'Active: 11681360 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227632 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607976 kB' 'Mapped: 211212 kB' 'Shmem: 10622972 kB' 'KReclaimable: 547428 kB' 'Slab: 1231320 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683892 kB' 'KernelStack: 22736 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12720504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
00:03:26.908 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace elided: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and hits "continue"]
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
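The two arithmetic guards above are the point of the odd_alloc case: the requested odd page count (1025) must be fully accounted for, with nothing surplus or reserved. A standalone re-check of the same invariant, reading the values straight from /proc/meminfo, might look like the following sketch (illustrative only, not the test's own code path):

#!/usr/bin/env bash
# Re-derive the odd_alloc accounting check from /proc/meminfo.
want=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Mirror of the traced guards: the pool must add up to the requested odd
# count, and with surp=resv=0 it must equal that count outright.
if (( total == want + surp + resv )) && (( total == want )); then
	echo "odd_alloc accounting OK: total=$total surp=$surp resv=$resv"
else
	echo "odd_alloc accounting mismatch: total=$total surp=$surp resv=$resv" >&2
	exit 1
fi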
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.911 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71210988 kB' 'MemAvailable: 75078452 kB' 'Buffers: 2704 kB' 'Cached: 14776316 kB' 'SwapCached: 0 kB' 'Active: 11681128 kB' 'Inactive: 3702268 kB' 'Active(anon): 11227400 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607732 kB' 'Mapped: 211212 kB' 'Shmem: 10623024 kB' 'KReclaimable: 547428 kB' 'Slab: 1231320 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 683892 kB' 'KernelStack: 22768 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12720892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220232 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace elided: the fields MemTotal through NFS_Unstable are compared against HugePages_Total and hit "continue"; this excerpt of the log ends mid-scan here]
-- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.912 18:40:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.912 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- 
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.913 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 39316680 kB' 'MemUsed: 8751716 kB' 'SwapCached: 0 kB' 'Active: 5954656 kB' 'Inactive: 438904 kB' 'Active(anon): 5646292 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999900 kB' 'Mapped: 71160 kB' 'AnonPages: 396900 kB' 'Shmem: 5252632 kB' 'KernelStack: 13816 kB' 'PageTables: 5960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339660 kB' 'Slab: 659708 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 320048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
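For reference, the lookup being traced here boils down to the following shell sketch, reconstructed from the xtrace itself (an approximation for illustration, not SPDK's verbatim setup/common.sh): read /proc/meminfo, or the per-node meminfo file when a node is given, strip the "Node N " prefix those per-node files carry, then scan key/value pairs until the requested key matches.

    #!/usr/bin/env bash
    # Minimal sketch of the traced get_meminfo logic (reconstructed, not
    # copied from SPDK). Requires extglob for the "Node +([0-9]) " pattern.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # Per-node files prefix every line with "Node N ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped
            echo "$val"                        # e.g. 1025 for HugePages_Total
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total    # system-wide lookup
    get_meminfo HugePages_Surp 0   # NUMA node 0 lookup

The per-key `continue` lines that dominate this log are exactly that inner loop skipping every snapshot key until it reaches the requested one.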
[... repetitive xtrace elided: setup/common.sh@32 tests each key of the node0 snapshot against HugePages_Surp and falls through 'continue' on every non-match ...]
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.914 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.915 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31895008 kB' 'MemUsed: 12323196 kB' 'SwapCached: 0 kB' 'Active: 5726880 kB' 'Inactive: 3263364 kB' 'Active(anon): 5581516 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8779144 kB' 'Mapped: 140052 kB' 'AnonPages: 211248 kB' 'Shmem: 5370416 kB' 'KernelStack: 8984 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207768 kB' 'Slab: 571612 kB' 'SReclaimable: 207768 kB' 'SUnreclaim: 363844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
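The surrounding hugepages.sh trace is doing per-node bookkeeping around those lookups. A rough sketch of that accounting, inferred from the xtrace (names mirror the trace, values are this run's; it is not SPDK's verbatim hugepages.sh): nodes_sys holds what the kernel actually placed on each node, nodes_test holds the expected count, bumped by reserved pages and each node's surplus before the final comparison.

    #!/usr/bin/env bash
    # Sketch of the traced per-node bookkeeping (approximation, not SPDK's
    # source). Indexed arrays, so 'node' inside (( )) is evaluated as a var.
    nodes_test=([0]=512 [1]=513)   # expected split of the 1025-page request
    nodes_sys=()

    # get_nodes: actual per-node totals straight from sysfs (512/513 here)
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    resv=0   # system-wide HugePages_Rsvd; 0 in the snapshots above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # per-node surplus, read from nodeN/meminfo as get_meminfo does; 0 here
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))
    done

With both surplus lookups returning 0, as below, the expected counts stay at 512 and 513.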
[... repetitive xtrace elided: setup/common.sh@32 tests each key of the node1 snapshot against HugePages_Surp and falls through 'continue' on every non-match ...]
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:26.916 node0=512 expecting 513
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:26.916 node1=513 expecting 512
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:26.916 
00:03:26.916 real	0m3.145s
00:03:26.916 user	0m1.238s
00:03:26.916 sys	0m1.951s
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.916 18:40:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.916 ************************************
00:03:26.916 END TEST odd_alloc
00:03:26.916 ************************************
00:03:26.916 18:40:11 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:26.916 18:40:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.916 18:40:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.916 18:40:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.916 ************************************
00:03:26.916 START TEST custom_alloc
00:03:26.916 ************************************
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.916 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.917 18:40:11 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.917 18:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.455 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.455 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.455 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:29.719 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70171272 kB' 'MemAvailable: 74038736 kB' 'Buffers: 2704 kB' 'Cached: 14776404 kB' 'SwapCached: 0 kB' 'Active: 11682020 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228292 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608472 kB' 'Mapped: 211248 kB' 'Shmem: 10623112 kB' 'KReclaimable: 547428 kB' 'Slab: 1231744 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684316 kB' 'KernelStack: 22832 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12721364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220264 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 
'DirectMap1G: 39845888 kB' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.720 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
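[editor note] At this point the trace has finished scanning /proc/meminfo for AnonHugePages (anon=0) and starts the identical scan again for HugePages_Surp; the long runs of "continue" above are get_meminfo stepping over every other meminfo key until the requested one matches. A hedged sketch of that parsing pattern, assuming only the system-wide /proc/meminfo case (the traced setup/common.sh also handles per-node /sys/devices/system/node/*/meminfo and uses mapfile rather than streaming):

    get_meminfo() {
        local get=$1 var val _
        # Scan 'Key: value kB' lines until the requested key is found.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }
    get_meminfo AnonHugePages    # prints 0 on this machine, matching anon=0 in the log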
00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70171672 kB' 'MemAvailable: 74039136 kB' 'Buffers: 2704 kB' 'Cached: 14776420 kB' 'SwapCached: 0 kB' 'Active: 11681812 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228084 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608264 kB' 'Mapped: 211208 kB' 'Shmem: 10623128 kB' 'KReclaimable: 547428 kB' 'Slab: 1231760 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684332 kB' 'KernelStack: 22800 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12721384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.721 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 
18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.722 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
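[editor note] The same scan is now completing for HugePages_Surp (surp=0 just below) and will run once more for HugePages_Rsvd. Taken together, verify_nr_hugepages is collecting the figures needed to confirm that the kernel really allocated the requested 1536 pages. One plausible shape of that final check, reusing the hypothetical get_meminfo sketch above (an illustration, not the script's exact assertion):

    nr_hugepages=1536                       # requested total from the log
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1536 in this run
    # The kernel's reported total should equal the request plus any surplus pages.
    (( total == nr_hugepages + surp )) && echo "hugepage count verified: $total"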
00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70172508 kB' 'MemAvailable: 74039972 kB' 'Buffers: 2704 kB' 'Cached: 14776424 kB' 'SwapCached: 0 kB' 'Active: 11682140 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228412 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608636 kB' 'Mapped: 211208 kB' 'Shmem: 10623132 kB' 'KReclaimable: 547428 kB' 'Slab: 1231760 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684332 kB' 'KernelStack: 22816 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 52959040 kB' 'Committed_AS: 12721404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.723 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.724 
18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.724 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:29.724 [... repetitive xtrace elided: each remaining /proc/meminfo key (Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) is tested against HugePages_Rsvd at setup/common.sh@32 and skipped via continue ...]
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:29.725 nr_hugepages=1536
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:29.725 resv_hugepages=0
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:29.725 surplus_hugepages=0
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:29.725 anon_hugepages=0
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
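The scan traced above is a straight key/value walk over a meminfo file. A minimal sketch of that lookup, reconstructed from the xtrace alone (the names mirror setup/common.sh's get_meminfo, but treat this as an approximation of the pattern, not the verbatim SPDK source):

  #!/usr/bin/env bash
  shopt -s extglob
  # Sketch of the get_meminfo pattern seen in the trace: pick the node-local
  # meminfo when a node id is given, strip the "Node N " prefix, then scan
  # key/value pairs with IFS=': ' until the requested counter matches.
  get_meminfo() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Node-local files prefix every line with "Node N "; strip it (extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"   # e.g. 0 for HugePages_Rsvd in the run above
              return 0
          fi
      done
      return 1
  }
  # Usage matching the trace: get_meminfo HugePages_Rsvd   -> 0
  #                           get_meminfo HugePages_Surp 0 -> 0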
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 70172508 kB' 'MemAvailable: 74039972 kB' 'Buffers: 2704 kB' 'Cached: 14776464 kB' 'SwapCached: 0 kB' 'Active: 11681864 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228136 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608264 kB' 'Mapped: 211208 kB' 'Shmem: 10623172 kB' 'KReclaimable: 547428 kB' 'Slab: 1231760 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684332 kB' 'KernelStack: 22800 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12721424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220248 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.725 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:29.726 [... repetitive xtrace elided: each key from MemTotal through Unaccepted is tested against HugePages_Total at setup/common.sh@32 and skipped via continue ...]
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
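The get_nodes step above just records how many NUMA nodes exist and how many pages each is expected to hold (512 on node0, 1024 on node1). A sketch of that enumeration: the loop shape follows the trace, while sourcing the counts from each node's 2048 kB nr_hugepages knob is an assumption made for illustration, not confirmed by the log:

  #!/usr/bin/env bash
  shopt -s extglob
  # Enumerate NUMA nodes the way the trace does, via the sysfs node glob.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # Assumed source of the per-node expectation (512/1024 in this run).
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
  echo "nodes: ${!nodes_sys[*]} (expected pages: ${nodes_sys[*]})"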
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 39333612 kB' 'MemUsed: 8734784 kB' 'SwapCached: 0 kB' 'Active: 5953312 kB' 'Inactive: 438904 kB' 'Active(anon): 5644948 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999896 kB' 'Mapped: 71164 kB' 'AnonPages: 395488 kB' 'Shmem: 5252628 kB' 'KernelStack: 13800 kB' 'PageTables: 5964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339660 kB' 'Slab: 660008 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 320348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:29.989 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.989 [... repetitive xtrace elided: each node0 key from MemTotal through HugePages_Free is tested against HugePages_Surp at setup/common.sh@32 and skipped via continue ...]
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
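The per-node surplus figure just extracted for node0 (and fetched for node1 next) is also available straight from the kernel's node-local meminfo files, where every line carries a "Node N" prefix. A standalone cross-check, using only standard kernel interfaces rather than SPDK code:

  # Pull HugePages_Surp for each node directly from its node-local meminfo.
  # Per-node lines look like: "Node 0 HugePages_Surp:     0".
  for n in 0 1; do
      awk '$3 == "HugePages_Surp:" { print "node" $2 ": " $4 }' \
          "/sys/devices/system/node/node$n/meminfo"
  done
  # Expected for this run: node0: 0 and node1: 0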
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.990 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.991 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 30840408 kB' 'MemUsed: 13377796 kB' 'SwapCached: 0 kB' 'Active: 5728944 kB' 'Inactive: 3263364 kB' 'Active(anon): 5583580 kB' 'Inactive(anon): 0 kB' 'Active(file): 145364 kB' 'Inactive(file): 3263364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8779296 kB' 'Mapped: 140044 kB' 'AnonPages: 213156 kB' 'Shmem: 5370568 kB' 'KernelStack: 9016 kB' 'PageTables: 2688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207768 kB' 'Slab: 571752 kB' 'SReclaimable: 207768 kB' 'SUnreclaim: 363984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:29.991 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.991 [... repetitive xtrace elided: each node1 key from MemTotal through HugePages_Free is tested against HugePages_Surp at setup/common.sh@32 and skipped via continue ...]
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:29.992 node0=512 expecting 512
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:29.992 node1=1024 expecting 1024 00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:29.992 00:03:29.992 real 0m2.986s 00:03:29.992 user 0m1.155s 00:03:29.992 sys 0m1.866s 00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.992 18:40:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:29.992 ************************************ 00:03:29.992 END TEST custom_alloc 00:03:29.992 ************************************ 00:03:29.992 18:40:14 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:29.992 18:40:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.992 18:40:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.992 18:40:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.992 ************************************ 00:03:29.992 START TEST no_shrink_alloc 00:03:29.992 ************************************ 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:29.992 18:40:14 
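(Editor's note: the get_test_nr_hugepages trace above shows the harness building a per-node hugepage plan: nr_hugepages=1024 and the caller pinned node 0, so nodes_test[0]=1024. Below is a minimal bash sketch of get_test_nr_hugepages_per_node reconstructed only from these traced commands; the even-split fallback is not exercised in this run and is an assumption, as are the global names nr_hugepages and nodes.)

    # Sketch reconstructed from the xtrace above; not the verbatim SPDK source.
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")            # e.g. ('0') in this run
        local _nr_hugepages=$nr_hugepages  # 1024 here
        local _no_nodes=$nodes             # 2 NUMA nodes on this box
        local -g nodes_test=()
        if ((${#user_nodes[@]} > 0)); then
            # Pin the whole allocation to each node the caller named,
            # matching the trace: nodes_test[_no_nodes]=1024
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages
            done
            return 0
        fi
        # Assumed fallback (not shown in this trace): spread the pages
        # evenly across all nodes.
        for ((_no_nodes = 0; _no_nodes < nodes; _no_nodes++)); do
            nodes_test[_no_nodes]=$((_nr_hugepages / nodes))
        done
    }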
00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.992 18:40:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:33.291 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:33.291 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.291 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.291 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71193584 kB' 'MemAvailable: 75061048 kB' 'Buffers: 2704 kB' 'Cached: 14776552 kB' 'SwapCached: 0 kB' 'Active: 11688060 kB' 'Inactive: 3702268 kB' 'Active(anon): 11234332 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614248 kB' 'Mapped: 211772 kB' 'Shmem: 10623260 kB' 'KReclaimable: 547428 kB' 'Slab: 1232276 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684848 kB' 'KernelStack: 22912 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12742956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220380 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
[... setup/common.sh@32 -- # continue: repeated for every /proc/meminfo field (MemTotal through HardwareCorrupted), none matching AnonHugePages ...]
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.293 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71194988 kB' 'MemAvailable: 75062452 kB' 'Buffers: 2704 kB' 'Cached: 14776556 kB' 'SwapCached: 0 kB' 'Active: 11683176 kB' 'Inactive: 3702268 kB' 'Active(anon): 11229448 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609504 kB' 'Mapped: 211640 kB' 'Shmem: 10623264 kB' 'KReclaimable: 547428 kB' 'Slab: 1232228 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684800 kB' 'KernelStack: 22880 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220280 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
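(Editor's note: every field-by-field scan in this log is the setup/common.sh get_meminfo helper walking /proc/meminfo until it reaches the requested key; the IFS=': ', read -r var val _, [[ ... ]]/continue, and echo/return lines above are its loop body. Below is a minimal runnable sketch assembled from those traced commands. Treat it as a reconstruction under assumptions rather than the verbatim SPDK helper; the per-node branch and the extglob requirement are inferred, not confirmed by this excerpt.)

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern below

    # get_meminfo <field> [node]: print the field's value and return 0.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node stats live under /sys; with node empty the trace probes
        # the nonexistent path /sys/devices/system/node/node/meminfo and
        # falls back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then  # e.g. AnonHugePages, HugePages_Surp
                echo "$val"                # kB value, or a bare count for HugePages_*
                return 0
            fi
        done
        return 1
    }

For example, get_meminfo HugePages_Surp prints 0 on this machine, which matches the echo 0 / return 0 records in the trace.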
[... setup/common.sh@32 -- # continue: repeated for every /proc/meminfo field (MemTotal through HugePages_Rsvd), none matching HugePages_Surp ...]
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71195980 kB' 'MemAvailable: 75063444 kB' 'Buffers: 2704 kB' 'Cached: 14776576 kB' 'SwapCached: 0 kB' 'Active: 11682704 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228976 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608960 kB' 'Mapped: 211228 kB' 'Shmem: 10623284 kB' 'KReclaimable: 547428 kB' 'Slab: 1232228 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684800 kB' 'KernelStack: 22864 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220280 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
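(Editor's note: for a quick manual check outside the harness, the same single-field lookup can be done with one awk call; this one-liner is illustrative only and not part of the SPDK scripts.)

    awk -F': +' '$1 == "HugePages_Rsvd" { print $2 }' /proc/meminfo   # prints 0 on this machine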
00:03:33.295 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: each field from MemTotal through HugePages_Free tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; no match, continue]
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.297 nr_hugepages=1024
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.297 resv_hugepages=0
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.297 surplus_hugepages=0
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.297 anon_hugepages=0
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71196696 kB' 'MemAvailable: 75064160 kB' 'Buffers: 2704 kB' 'Cached: 14776616 kB' 'SwapCached: 0 kB' 'Active: 11682380 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228652 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608552 kB' 'Mapped: 211228 kB' 'Shmem: 10623324 kB' 'KReclaimable: 547428 kB' 'Slab: 1232228 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684800 kB' 'KernelStack: 22848 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220280 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB'
00:03:33.297 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: each field from MemTotal through Unaccepted tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; no match, continue]
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38272180 kB' 'MemUsed: 9796216 kB' 'SwapCached: 0 kB' 'Active: 5955088 kB' 'Inactive: 438904 kB' 'Active(anon): 5646724 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999928 kB' 'Mapped: 71172 kB' 'AnonPages: 397184 kB' 'Shmem: 5252660 kB' 'KernelStack: 13832 kB' 'PageTables: 5948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339660 kB' 'Slab: 660644 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 320984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
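Here get_meminfo is called with an explicit node argument, so common.sh@23-24 swaps the source file from /proc/meminfo to the node-local sysfs copy, whose lines carry a "Node 0 " prefix that the mem=(...) expansion strips. The per-node enumeration feeding this loop can be sketched as follows (illustrative; nodes_sys mirrors the array name in the trace, and the loop body is an assumption based on the @29/@30 lines above):

    # Count NUMA nodes via sysfs and seed per-node page counts (sketch)
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0      # array index = numeric node id
    done
    no_nodes=${#nodes_sys[@]}            # 2 on this test rig

Indexing the array by the trailing node number is what lets the later "node0=1024 expecting 1024" check pair each node's observed count with its expected one.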
00:03:33.299 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: each node0 meminfo field from MemTotal through HugePages_Free tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no match, continue]
IFS=': ' 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.301 node0=1024 expecting 1024 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.301 18:40:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.879 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:35.879 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.879 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.879 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local 
surp 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.147 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 71226632 kB' 'MemAvailable: 75094096 kB' 'Buffers: 2704 kB' 'Cached: 14776692 kB' 'SwapCached: 0 kB' 'Active: 11683004 kB' 'Inactive: 3702268 kB' 'Active(anon): 11229276 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608632 kB' 'Mapped: 211232 kB' 'Shmem: 10623400 kB' 'KReclaimable: 547428 kB' 'Slab: 1231636 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684208 kB' 'KernelStack: 22944 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12721840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220328 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.148 18:40:20 
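What the walls of `continue` lines above and below record is get_meminfo walking that snapshot one field at a time until the requested key matches, using `IFS=': ' read -r var val _` to split each line. A minimal standalone sketch of the same parsing pattern (illustrative only, not the actual SPDK setup/common.sh; the function name is invented):

#!/usr/bin/env bash
# Minimal sketch of the scan traced above (illustrative, not the SPDK
# script): split each meminfo line on ': ' and compare the key until the
# requested field is found, then print its value.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        continue   # every non-matching key emits one 'continue' in the xtrace
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Surp   # prints 0 on the node traced here

Scanning line by line like this avoids spawning a grep/awk process per lookup, which matters when the test harness queries meminfo dozens of times in a row.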
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [per-key scan elided: every field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped via continue]
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-@29 -- # local get=HugePages_Surp node= var val mem_f mem; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:36.148 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot, effectively unchanged: 'MemFree: 71228204 kB' 'MemAvailable: 75095668 kB' 'Cached: 14776696 kB' 'AnonPages: 608792 kB' 'Mapped: 211112 kB' 'KernelStack: 22928 kB' 'PageTables: 9096 kB'; hugepage counters identical: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0']
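The `mem=("${mem[@]#Node +([0-9]) }")` strip and the `/sys/devices/system/node/node/meminfo` existence check in the trace show how the same function handles per-node queries: a node's meminfo carries a "Node N " prefix on every line that must be removed before the key comparison. A hedged sketch of that variant, with behaviour inferred from the trace rather than copied from the real script:

#!/usr/bin/env bash
# Per-node variant (assumed behaviour, mirroring the "Node N " strip seen
# in the trace). Node meminfo lines look like "Node 0 HugePages_Total: 1024".
shopt -s extglob
get_node_meminfo() {
    local node=$1 get=$2 line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }              # drop the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

get_node_meminfo 0 HugePages_Free   # 1024 per the snapshots above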
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [per-key scan elided: every field is compared against HugePages_Surp and skipped via continue]
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-@29 -- # local get=HugePages_Rsvd node= var val mem_f mem; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:03:36.149 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [third /proc/meminfo snapshot, effectively unchanged: 'MemFree: 71229196 kB' 'MemAvailable: 75096660 kB' 'Cached: 14776700 kB' 'AnonPages: 608636 kB' 'Mapped: 211236 kB' 'KernelStack: 22848 kB' 'PageTables: 9008 kB'; hugepage counters identical: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0']
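These lookups (anon, surp, and now rsvd) feed the bookkeeping that produced the earlier "node0=1024 expecting 1024" line: surplus pages are folded into the per-node expectation, the distinct expected and observed totals are recorded in sorted_t/sorted_s, and the two are compared. A simplified reconstruction of that check, following the hugepages.sh@117-@130 lines visible in the trace, with invented inputs rather than the real script's state:

#!/usr/bin/env bash
# Simplified reconstruction of the verify step (assumption: mirrors the
# trace at hugepages.sh@117-@130; nodes_test/nodes_sys values are taken
# from this log, not from setup/hugepages.sh itself).
nodes_test=( [0]=1024 )   # pages the test requested, indexed by NUMA node
nodes_sys=( [0]=1024 )    # pages the kernel reports, indexed by NUMA node
sorted_t=() sorted_s=()

surp=0   # HugePages_Surp, as returned by get_meminfo above

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))   # @117: fold surplus pages into the expectation
    sorted_t[nodes_test[node]]=1     # @127: record each distinct expected total
    sorted_s[nodes_sys[node]]=1      # @127: record each distinct observed total
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"   # @128
    (( nodes_sys[node] == nodes_test[node] )) || exit 1                   # @130
done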
setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
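The dozens of near-identical `continue` entries surrounding this point are bash xtrace output from the test's get_meminfo helper: it snapshots a meminfo file into an array with mapfile, splits each entry on `': '`, and skips every field until the requested key (here HugePages_Rsvd) matches, at which point it echoes the value and returns. A minimal sketch of that idiom follows; it is a simplified reconstruction of the traced setup/common.sh logic, not the SPDK source itself:

    # Print the value of one /proc/meminfo field, e.g.: get_meminfo HugePages_Rsvd
    get_meminfo() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo             # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line" # "MemTotal: 92286600 kB" -> key/value/unit
            [[ $var == "$get" ]] || continue       # skip non-matching keys, as traced above
            echo "$val"
            return 0
        done
        return 1
    }

On this machine `get_meminfo HugePages_Rsvd` prints 0, which is why the scan below ends in `echo 0` / `return 0` and hugepages.sh records resv=0.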
00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
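The right-hand side of each comparison appears as `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` only because of how xtrace prints it: inside `[[ ... == ... ]]` an unquoted right side is a glob pattern, so when the script quotes the key to force a literal comparison, bash's trace output escapes every character to show that no pattern matching will occur. A short illustration (output lines are indicative, not from this log):

    key=HugePages_Rsvd
    [[ $key == HugePages_* ]]      && echo 'pattern match'   # unquoted RHS: glob
    [[ $key == "HugePages_Rsvd" ]] && echo 'literal match'   # quoted RHS: exact string
    set -x
    [[ $key == "HugePages_Rsvd" ]]  # traces as: [[ HugePages_Rsvd == \H\u\g\e... ]]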
00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.150 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.151 nr_hugepages=1024 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.151 resv_hugepages=0 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.151 surplus_hugepages=0 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.151 anon_hugepages=0 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
92286600 kB' 'MemFree: 71228692 kB' 'MemAvailable: 75096156 kB' 'Buffers: 2704 kB' 'Cached: 14776704 kB' 'SwapCached: 0 kB' 'Active: 11682364 kB' 'Inactive: 3702268 kB' 'Active(anon): 11228636 kB' 'Inactive(anon): 0 kB' 'Active(file): 453728 kB' 'Inactive(file): 3702268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608504 kB' 'Mapped: 211236 kB' 'Shmem: 10623412 kB' 'KReclaimable: 547428 kB' 'Slab: 1231636 kB' 'SReclaimable: 547428 kB' 'SUnreclaim: 684208 kB' 'KernelStack: 22832 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12722284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220296 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4363220 kB' 'DirectMap2M: 57182208 kB' 'DirectMap1G: 39845888 kB' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
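This second scan is the same helper re-invoked for HugePages_Total with `node=` empty, so the `-e /sys/devices/system/node/node/meminfo` test fails and the code falls back to the global /proc/meminfo. When a node number is supplied (as in the per-node HugePages_Surp query further down), the per-node sysfs file is used instead, and every line there carries a `Node N ` prefix that the trace strips with an extglob substitution. A node-aware sketch of that selection, under the same extglob assumption the trace shows:

    shopt -s extglob
    get_node_meminfo() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        # An empty $node makes this path invalid, so the global file is kept.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " on sysfs lines
        printf '%s\n' "${mem[@]}"
    }

For example, `get_node_meminfo 0 | grep HugePages_Surp` yields the per-node surplus count that the trace later reads for node 0.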
00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.151 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
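The next trace entry finally matches HugePages_Total, echoes 1024, and hugepages.sh re-checks its accounting invariant: the kernel's total must equal the requested page count plus surplus plus reserved pages. With the values read in this run, the check reduces to a trivial identity:

    nr_hugepages=1024   # requested earlier via the kernel's hugepage count knob
    surp=0              # HugePages_Surp, as read by get_meminfo
    resv=0              # HugePages_Rsvd, as read by get_meminfo
    (( 1024 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'

The same `(( ... ))` arithmetic test appears twice in the trace (hugepages.sh@107 and @110), once before and once after the reserved count is folded in.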
00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38293720 kB' 'MemUsed: 9774676 kB' 'SwapCached: 0 kB' 'Active: 5955444 kB' 'Inactive: 438904 kB' 'Active(anon): 5647080 kB' 'Inactive(anon): 0 kB' 'Active(file): 308364 kB' 'Inactive(file): 438904 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5999964 kB' 'Mapped: 71180 kB' 'AnonPages: 397608 kB' 'Shmem: 5252696 kB' 'KernelStack: 13832 kB' 'PageTables: 5968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 339660 kB' 'Slab: 659976 kB' 'SReclaimable: 339660 kB' 'SUnreclaim: 320316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 
18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.152 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.153 node0=1024 expecting 1024 00:03:36.153 18:40:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.153 00:03:36.153 real 0m6.158s 00:03:36.153 user 0m2.453s 00:03:36.153 sys 0m3.792s 00:03:36.153 18:40:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.153 18:40:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.153 ************************************ 00:03:36.153 END TEST no_shrink_alloc 00:03:36.153 ************************************ 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.153 18:40:21 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:36.153 18:40:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:36.153 00:03:36.153 real 0m23.125s 00:03:36.153 user 0m8.901s 00:03:36.153 sys 0m13.753s 00:03:36.153 18:40:21 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.153 18:40:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.153 ************************************ 00:03:36.153 END TEST hugepages 00:03:36.153 ************************************ 00:03:36.153 18:40:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:36.153 18:40:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.153 18:40:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.153 18:40:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.153 ************************************ 00:03:36.153 START TEST driver 00:03:36.153 ************************************ 00:03:36.153 18:40:21 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:36.413 * Looking for test storage... 
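The clear_hp sweep traced above (hugepages.sh@39-45) releases the pools the tests reserved: it writes 0 into every per-node hugepage count and records that the pools were cleared. Condensed from the trace (requires root, as in the CI run):

# clear_hp, as traced: zero every per-node hugepage pool so the pages
# reserved for the tests are handed back to the kernel.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes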
00:03:36.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.413 18:40:21 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:36.413 18:40:21 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.413 18:40:21 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.608 18:40:25 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.608 18:40:25 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.608 18:40:25 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.608 18:40:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.608 ************************************ 00:03:40.608 START TEST guess_driver 00:03:40.608 ************************************ 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:40.608 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver
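The guess_driver trace above reduces to a short decision: vfio-pci is usable when the host has populated IOMMU groups (175 of them here) or vfio's unsafe no-IOMMU mode is switched on, and when modprobe can resolve the vfio_pci dependency chain. A condensed sketch of that logic, inferred from the trace rather than copied from driver.sh:

vfio() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        # The traced is_driver() checks that modprobe --show-depends output
        # names .ko modules; testing the exit status is a simplification.
        if modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
            return 0
        fi
    fi
    return 1
}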
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:40.608 Looking for driver=vfio-pci 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.608 18:40:25 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.900 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.901 18:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.470 18:40:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.665 00:03:48.665 real 0m8.165s 00:03:48.665 user 0m2.413s 00:03:48.665 sys 0m4.142s 00:03:48.665 18:40:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.665 18:40:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.665 ************************************ 00:03:48.665 END TEST guess_driver 00:03:48.665 ************************************ 00:03:48.923 00:03:48.923 real 0m12.554s 00:03:48.923 user 0m3.627s 00:03:48.923 sys 0m6.492s 00:03:48.923 18:40:33 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.923 
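The repetitive [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] block above is one loop iteration per PCI device: setup.sh config prints a status line per device ending in "-> <driver>", and guess_driver confirms every device was bound to the driver it picked. Roughly (variable names come from the trace; the command substitution and $rootdir, the spdk checkout, are a sketch):

fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue           # only lines that carry a binding
    [[ $setup_driver == "$driver" ]] || fail=1  # expect vfio-pci everywhere
done < <("$rootdir/scripts/setup.sh" config)
((fail == 0))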
18:40:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.923 ************************************ 00:03:48.923 END TEST driver 00:03:48.923 ************************************ 00:03:48.923 18:40:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:48.923 18:40:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.923 18:40:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.923 18:40:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.923 ************************************ 00:03:48.923 START TEST devices 00:03:48.923 ************************************ 00:03:48.923 18:40:33 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:48.923 * Looking for test storage... 00:03:48.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:48.923 18:40:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:48.923 18:40:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:48.923 18:40:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.923 18:40:33 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:52.216 18:40:37 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:52.216 No valid GPT data, 
bailing 00:03:52.216 18:40:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:52.216 18:40:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:52.216 18:40:37 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:52.216 18:40:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.216 18:40:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.216 ************************************ 00:03:52.216 START TEST nvme_mount 00:03:52.216 ************************************ 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:52.216 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:52.217 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.217 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:52.217 18:40:37 
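Device selection earlier in this block follows a simple filter before nvme_mount starts: a namespace qualifies only if it is not zoned, spdk-gpt.py finds no partition table in use on it ("No valid GPT data, bailing" is the pass case), and it is at least min_disk_size = 3221225472 bytes (3 GiB) large. A sketch of the filter; the traced code additionally maps each accepted block device to its PCI address (0000:86:00.0 here):

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $(<"$block/queue/zoned") == none ]] || continue             # skip zoned namespaces
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue   # partition table => in use
    size=$(($(<"$block/size") * 512))    # /sys size is in 512-byte sectors
    ((size >= min_disk_size)) && blocks+=("$dev")
done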
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:52.217 18:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:53.597 Creating new GPT entries in memory. 00:03:53.597 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:53.597 other utilities. 00:03:53.597 18:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:53.597 18:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.597 18:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:53.597 18:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.597 18:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:54.536 Creating new GPT entries in memory. 00:03:54.536 The operation has completed successfully. 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2281370 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
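The partition/format/mount sequence traced above is the core of nvme_mount: zap any existing GPT, create a single 1 GiB partition, format it ext4 and mount it at the test mount point. Condensed from the trace ($rootdir stands for the spdk checkout; the dummy-file step is shown with touch, equivalent to the traced redirection):

nvme_mount=$rootdir/test/setup/nvme_mount
sgdisk /dev/nvme0n1 --zap-all
# Sectors 2048..2099199 = 2097152 sectors * 512 B = 1 GiB; flock serializes
# against concurrent access to the disk while the table is rewritten.
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
mkdir -p "$nvme_mount"
mkfs.ext4 -qF /dev/nvme0n1p1
mount /dev/nvme0n1p1 "$nvme_mount"
touch "$nvme_mount/test_nvme"   # the marker file the verify pass looks for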
00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.536 18:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:57.137 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:57.397 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.397 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:57.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:57.657 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:57.657 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:57.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:57.657 18:40:42 
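cleanup_nvme, traced above, is the teardown mirror image: unmount if mounted, then wipefs the partition and the whole disk so no ext4 or GPT signatures survive into the next pass (the "2 bytes were erased ... 53 ef" lines are wipefs removing the ext4 magic). Sketch:

cleanup_nvme() {
    if mountpoint -q "$nvme_mount"; then
        umount "$nvme_mount"
    fi
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
}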
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.657 18:40:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:00.192 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount 
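Each verify pass above re-runs setup.sh config with PCI_ALLOWED pinned to the device under test (0000:86:00.0) and scans the per-device status lines for an "Active devices: ..." entry naming the expected mount, setting found=1 on a match, which proves setup.sh refused to rebind a disk that is in use. Shape of the loop (a sketch; argument handling and the mount-point/test-file checks are trimmed):

verify() {
    local dev=$1 mounts=$2 found=0 pci status
    while read -r pci _ _ status; do
        if [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]]; then
            found=1   # the device is busy, so setup.sh must not touch it
        fi
    done < <(PCI_ALLOWED=$dev "$rootdir/scripts/setup.sh" config)
    ((found == 1))
}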
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.452 18:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.745 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.746 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.746 00:04:03.746 real 0m11.174s 00:04:03.746 user 0m3.303s 00:04:03.746 sys 0m5.719s 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.746 18:40:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:03.746 ************************************ 00:04:03.746 END TEST nvme_mount 00:04:03.746 ************************************ 
00:04:03.746 18:40:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:03.746 18:40:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.746 18:40:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.746 18:40:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.746 ************************************ 00:04:03.746 START TEST dm_mount 00:04:03.746 ************************************ 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.746 18:40:48 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.685 Creating new GPT entries in memory. 00:04:04.685 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.685 other utilities. 00:04:04.685 18:40:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.685 18:40:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.685 18:40:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.685 18:40:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.685 18:40:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.623 Creating new GPT entries in memory. 00:04:05.623 The operation has completed successfully. 
00:04:05.623 18:40:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.623 18:40:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.623 18:40:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.623 18:40:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.623 18:40:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:06.561 The operation has completed successfully. 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2285596 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.561 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
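dm_mount, traced above, carves two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351), joins them into a device-mapper node named nvme_dm_test, waits for /dev/mapper/nvme_dm_test to appear, then formats and mounts it. The dmsetup table below is illustrative only: the log does not print it, and a linear concatenation of the two partitions is assumed.

flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
# Assumed linear table: each partition is 2097152 sectors long.
dmsetup create nvme_dm_test <<TABLE
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
TABLE
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # resolves to /dev/dm-0 in the log
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test "$dm_mount"
# Both partitions now list dm-0 as a holder, which the trace checks:
[[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
[[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]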
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.820 18:40:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:09.357 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.617 18:40:54 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.617 18:40:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:12.910 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:12.910 00:04:12.910 real 0m9.055s 00:04:12.910 user 0m2.239s 00:04:12.910 sys 0m3.853s 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.910 18:40:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:12.910 ************************************ 00:04:12.910 END TEST dm_mount 00:04:12.910 ************************************ 00:04:12.910 18:40:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.911 18:40:57 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.911 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:12.911 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:12.911 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:12.911 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.911 18:40:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:12.911 00:04:12.911 real 0m24.051s 00:04:12.911 user 0m6.881s 00:04:12.911 sys 0m11.936s 00:04:12.911 18:40:57 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.911 18:40:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:12.911 ************************************ 00:04:12.911 END TEST devices 00:04:12.911 ************************************ 00:04:12.911 00:04:12.911 real 1m20.973s 00:04:12.911 user 0m26.644s 00:04:12.911 sys 0m44.804s 00:04:12.911 18:40:57 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.911 18:40:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.911 ************************************ 00:04:12.911 END TEST setup.sh 00:04:12.911 ************************************ 00:04:12.911 18:40:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:16.253 Hugepages 00:04:16.253 node hugesize free / total 00:04:16.253 node0 1048576kB 0 / 0 00:04:16.253 node0 2048kB 2048 / 2048 00:04:16.254 node1 1048576kB 0 / 0 00:04:16.254 node1 2048kB 0 / 0 00:04:16.254 00:04:16.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.254 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:16.254 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:16.254 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:16.254 18:41:00 -- spdk/autotest.sh@130 -- # uname -s 00:04:16.254 18:41:00 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:16.254 18:41:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:16.254 18:41:00 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.793 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.793 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.732 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.992 18:41:04 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:20.930 18:41:05 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:20.930 18:41:05 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:20.930 18:41:05 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.930 18:41:05 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:20.930 18:41:05 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:20.930 18:41:05 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:20.930 18:41:05 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.930 18:41:05 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.930 18:41:05 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:20.930 18:41:05 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:20.930 18:41:05 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:86:00.0 00:04:20.930 18:41:05 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.467 Waiting for block devices as requested 00:04:23.726 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:23.726 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.985 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.985 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.985 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.985 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.245 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.245 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.245 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.504 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:24.504 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:24.504 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:24.764 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.764 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.764 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.764 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:25.023 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:25.023 18:41:09 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 
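The bdf loop entered above walks the list that get_nvme_bdfs built a few entries earlier via gen_nvme.sh and jq. Reduced to a stand-alone sketch (the workspace path is the one used in this run; the empty-list error branch is added here for illustration):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # gen_nvme.sh emits an SPDK JSON config; jq pulls each controller's PCI address
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers detected" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # prints 0000:86:00.0 on this node
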
00:04:25.023 18:41:09 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1500 -- # grep 0000:86:00.0/nvme/nvme 00:04:25.023 18:41:09 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:25.023 18:41:09 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:25.023 18:41:09 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:25.023 18:41:09 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:25.023 18:41:09 -- common/autotest_common.sh@1543 -- # oacs=' 0xe' 00:04:25.023 18:41:09 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:25.023 18:41:09 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:25.023 18:41:09 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:25.023 18:41:09 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:25.023 18:41:09 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:25.023 18:41:09 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:25.023 18:41:09 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:25.023 18:41:09 -- common/autotest_common.sh@1555 -- # continue 00:04:25.023 18:41:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:25.023 18:41:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.023 18:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 18:41:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:25.023 18:41:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.023 18:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.023 18:41:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.315 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.315 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.882 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.882 18:41:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:28.882 18:41:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.882 18:41:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.141 18:41:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:29.141 18:41:13 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:29.141 18:41:13 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.141 18:41:13 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:29.141 18:41:13 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:29.141 18:41:13 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:29.141 18:41:13 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:29.141 18:41:13 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:29.141 18:41:13 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.141 18:41:13 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.141 18:41:13 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:29.141 18:41:13 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:29.141 18:41:13 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:86:00.0 00:04:29.141 18:41:13 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:29.141 18:41:13 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:29.141 18:41:14 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:04:29.141 18:41:14 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:29.141 18:41:14 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:04:29.141 18:41:14 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:86:00.0 00:04:29.141 18:41:14 -- common/autotest_common.sh@1590 -- # [[ -z 0000:86:00.0 ]] 00:04:29.141 18:41:14 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=2294915 00:04:29.141 18:41:14 -- common/autotest_common.sh@1596 -- # waitforlisten 2294915 00:04:29.141 18:41:14 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.141 18:41:14 -- common/autotest_common.sh@829 -- # '[' -z 2294915 ']' 00:04:29.141 18:41:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.141 18:41:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.142 18:41:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.142 18:41:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.142 18:41:14 -- common/autotest_common.sh@10 -- # set +x 00:04:29.142 [2024-07-24 18:41:14.056686] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
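The opal_revert_cleanup path traced above (mapfile -t bdfs, get_nvme_bdfs_by_id 0x0a54) narrows the controller list to those whose PCI device id matches. In isolation, and assuming the get_nvme_bdfs helper sketched earlier, the filter is roughly:

  target_id=0x0a54                 # the device id checked in this run
  matched=()
  for bdf in $(get_nvme_bdfs); do
      # sysfs exposes each function's PCI device id under its BDF
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target_id" ]] && matched+=("$bdf")
  done
  printf '%s\n' "${matched[@]}"    # 0000:86:00.0 here
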
00:04:29.142 [2024-07-24 18:41:14.056732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294915 ] 00:04:29.142 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.142 [2024-07-24 18:41:14.128937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.400 [2024-07-24 18:41:14.219706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.966 18:41:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.966 18:41:14 -- common/autotest_common.sh@862 -- # return 0 00:04:29.966 18:41:14 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:04:29.966 18:41:14 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:04:29.966 18:41:14 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:04:33.260 nvme0n1 00:04:33.260 18:41:18 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:33.260 [2024-07-24 18:41:18.233384] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:33.260 request: 00:04:33.260 { 00:04:33.260 "nvme_ctrlr_name": "nvme0", 00:04:33.260 "password": "test", 00:04:33.260 "method": "bdev_nvme_opal_revert", 00:04:33.260 "req_id": 1 00:04:33.260 } 00:04:33.260 Got JSON-RPC error response 00:04:33.260 response: 00:04:33.260 { 00:04:33.260 "code": -32602, 00:04:33.260 "message": "Invalid parameters" 00:04:33.260 } 00:04:33.260 18:41:18 -- common/autotest_common.sh@1602 -- # true 00:04:33.260 18:41:18 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:04:33.260 18:41:18 -- common/autotest_common.sh@1606 -- # killprocess 2294915 00:04:33.260 18:41:18 -- common/autotest_common.sh@948 -- # '[' -z 2294915 ']' 00:04:33.260 18:41:18 -- common/autotest_common.sh@952 -- # kill -0 2294915 00:04:33.260 18:41:18 -- common/autotest_common.sh@953 -- # uname 00:04:33.260 18:41:18 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.260 18:41:18 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2294915 00:04:33.520 18:41:18 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.520 18:41:18 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.520 18:41:18 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2294915' 00:04:33.520 killing process with pid 2294915 00:04:33.520 18:41:18 -- common/autotest_common.sh@967 -- # kill 2294915 00:04:33.520 18:41:18 -- common/autotest_common.sh@972 -- # wait 2294915 00:04:35.428 18:41:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:35.428 18:41:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:35.428 18:41:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.428 18:41:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.428 18:41:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:35.428 18:41:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.428 18:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:35.428 18:41:20 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:35.428 18:41:20 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.428 18:41:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:35.428 18:41:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.428 18:41:20 -- common/autotest_common.sh@10 -- # set +x 00:04:35.428 ************************************ 00:04:35.428 START TEST env 00:04:35.428 ************************************ 00:04:35.428 18:41:20 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.428 * Looking for test storage... 00:04:35.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:35.428 18:41:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.428 18:41:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.428 18:41:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.428 18:41:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.428 ************************************ 00:04:35.428 START TEST env_memory 00:04:35.428 ************************************ 00:04:35.428 18:41:20 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.428 00:04:35.428 00:04:35.428 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.428 http://cunit.sourceforge.net/ 00:04:35.428 00:04:35.428 00:04:35.428 Suite: memory 00:04:35.428 Test: alloc and free memory map ...[2024-07-24 18:41:20.204761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.428 passed 00:04:35.428 Test: mem map translation ...[2024-07-24 18:41:20.233859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.428 [2024-07-24 18:41:20.233881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.428 [2024-07-24 18:41:20.233935] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.428 [2024-07-24 18:41:20.233944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.428 passed 00:04:35.428 Test: mem map registration ...[2024-07-24 18:41:20.293709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:35.428 [2024-07-24 18:41:20.293730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:35.428 passed 00:04:35.428 Test: mem map adjacent registrations ...passed 00:04:35.428 00:04:35.428 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.428 suites 1 1 n/a 0 0 00:04:35.428 tests 4 4 4 0 0 00:04:35.428 asserts 152 152 152 0 n/a 00:04:35.428 00:04:35.428 Elapsed time = 0.205 seconds 00:04:35.428 00:04:35.428 real 0m0.217s 00:04:35.428 user 0m0.207s 00:04:35.428 sys 0m0.010s 00:04:35.428 18:41:20 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.428 18:41:20 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:35.428 ************************************ 00:04:35.428 END TEST env_memory 00:04:35.428 ************************************ 00:04:35.428 18:41:20 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.428 18:41:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.428 18:41:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.428 18:41:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.688 ************************************ 00:04:35.688 START TEST env_vtophys 00:04:35.688 ************************************ 00:04:35.688 18:41:20 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.688 EAL: lib.eal log level changed from notice to debug 00:04:35.688 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.688 EAL: Detected lcore 1 as core 1 on socket 0 00:04:35.688 EAL: Detected lcore 2 as core 2 on socket 0 00:04:35.688 EAL: Detected lcore 3 as core 3 on socket 0 00:04:35.688 EAL: Detected lcore 4 as core 4 on socket 0 00:04:35.688 EAL: Detected lcore 5 as core 5 on socket 0 00:04:35.688 EAL: Detected lcore 6 as core 6 on socket 0 00:04:35.688 EAL: Detected lcore 7 as core 8 on socket 0 00:04:35.688 EAL: Detected lcore 8 as core 9 on socket 0 00:04:35.688 EAL: Detected lcore 9 as core 10 on socket 0 00:04:35.688 EAL: Detected lcore 10 as core 11 on socket 0 00:04:35.688 EAL: Detected lcore 11 as core 12 on socket 0 00:04:35.688 EAL: Detected lcore 12 as core 13 on socket 0 00:04:35.688 EAL: Detected lcore 13 as core 14 on socket 0 00:04:35.688 EAL: Detected lcore 14 as core 16 on socket 0 00:04:35.688 EAL: Detected lcore 15 as core 17 on socket 0 00:04:35.688 EAL: Detected lcore 16 as core 18 on socket 0 00:04:35.688 EAL: Detected lcore 17 as core 19 on socket 0 00:04:35.688 EAL: Detected lcore 18 as core 20 on socket 0 00:04:35.688 EAL: Detected lcore 19 as core 21 on socket 0 00:04:35.688 EAL: Detected lcore 20 as core 22 on socket 0 00:04:35.688 EAL: Detected lcore 21 as core 24 on socket 0 00:04:35.688 EAL: Detected lcore 22 as core 25 on socket 0 00:04:35.688 EAL: Detected lcore 23 as core 26 on socket 0 00:04:35.688 EAL: Detected lcore 24 as core 27 on socket 0 00:04:35.688 EAL: Detected lcore 25 as core 28 on socket 0 00:04:35.688 EAL: Detected lcore 26 as core 29 on socket 0 00:04:35.688 EAL: Detected lcore 27 as core 30 on socket 0 00:04:35.688 EAL: Detected lcore 28 as core 0 on socket 1 00:04:35.688 EAL: Detected lcore 29 as core 1 on socket 1 00:04:35.688 EAL: Detected lcore 30 as core 2 on socket 1 00:04:35.688 EAL: Detected lcore 31 as core 3 on socket 1 00:04:35.688 EAL: Detected lcore 32 as core 4 on socket 1 00:04:35.688 EAL: Detected lcore 33 as core 5 on socket 1 00:04:35.688 EAL: Detected lcore 34 as core 6 on socket 1 00:04:35.688 EAL: Detected lcore 35 as core 8 on socket 1 00:04:35.688 EAL: Detected lcore 36 as core 9 on socket 1 00:04:35.688 EAL: Detected lcore 37 as core 10 on socket 1 00:04:35.688 EAL: Detected lcore 38 as core 11 on socket 1 00:04:35.688 EAL: Detected lcore 39 as core 12 on socket 1 00:04:35.688 EAL: Detected lcore 40 as core 13 on socket 1 00:04:35.688 EAL: Detected lcore 41 as core 14 on socket 1 00:04:35.688 EAL: Detected lcore 42 as core 16 on socket 1 00:04:35.688 EAL: Detected lcore 43 as core 17 on socket 1 00:04:35.688 EAL: Detected lcore 44 as core 18 on socket 1 00:04:35.688 EAL: Detected lcore 45 as core 19 on socket 1 
00:04:35.688 EAL: Detected lcore 46 as core 20 on socket 1 00:04:35.688 EAL: Detected lcore 47 as core 21 on socket 1 00:04:35.689 EAL: Detected lcore 48 as core 22 on socket 1 00:04:35.689 EAL: Detected lcore 49 as core 24 on socket 1 00:04:35.689 EAL: Detected lcore 50 as core 25 on socket 1 00:04:35.689 EAL: Detected lcore 51 as core 26 on socket 1 00:04:35.689 EAL: Detected lcore 52 as core 27 on socket 1 00:04:35.689 EAL: Detected lcore 53 as core 28 on socket 1 00:04:35.689 EAL: Detected lcore 54 as core 29 on socket 1 00:04:35.689 EAL: Detected lcore 55 as core 30 on socket 1 00:04:35.689 EAL: Detected lcore 56 as core 0 on socket 0 00:04:35.689 EAL: Detected lcore 57 as core 1 on socket 0 00:04:35.689 EAL: Detected lcore 58 as core 2 on socket 0 00:04:35.689 EAL: Detected lcore 59 as core 3 on socket 0 00:04:35.689 EAL: Detected lcore 60 as core 4 on socket 0 00:04:35.689 EAL: Detected lcore 61 as core 5 on socket 0 00:04:35.689 EAL: Detected lcore 62 as core 6 on socket 0 00:04:35.689 EAL: Detected lcore 63 as core 8 on socket 0 00:04:35.689 EAL: Detected lcore 64 as core 9 on socket 0 00:04:35.689 EAL: Detected lcore 65 as core 10 on socket 0 00:04:35.689 EAL: Detected lcore 66 as core 11 on socket 0 00:04:35.689 EAL: Detected lcore 67 as core 12 on socket 0 00:04:35.689 EAL: Detected lcore 68 as core 13 on socket 0 00:04:35.689 EAL: Detected lcore 69 as core 14 on socket 0 00:04:35.689 EAL: Detected lcore 70 as core 16 on socket 0 00:04:35.689 EAL: Detected lcore 71 as core 17 on socket 0 00:04:35.689 EAL: Detected lcore 72 as core 18 on socket 0 00:04:35.689 EAL: Detected lcore 73 as core 19 on socket 0 00:04:35.689 EAL: Detected lcore 74 as core 20 on socket 0 00:04:35.689 EAL: Detected lcore 75 as core 21 on socket 0 00:04:35.689 EAL: Detected lcore 76 as core 22 on socket 0 00:04:35.689 EAL: Detected lcore 77 as core 24 on socket 0 00:04:35.689 EAL: Detected lcore 78 as core 25 on socket 0 00:04:35.689 EAL: Detected lcore 79 as core 26 on socket 0 00:04:35.689 EAL: Detected lcore 80 as core 27 on socket 0 00:04:35.689 EAL: Detected lcore 81 as core 28 on socket 0 00:04:35.689 EAL: Detected lcore 82 as core 29 on socket 0 00:04:35.689 EAL: Detected lcore 83 as core 30 on socket 0 00:04:35.689 EAL: Detected lcore 84 as core 0 on socket 1 00:04:35.689 EAL: Detected lcore 85 as core 1 on socket 1 00:04:35.689 EAL: Detected lcore 86 as core 2 on socket 1 00:04:35.689 EAL: Detected lcore 87 as core 3 on socket 1 00:04:35.689 EAL: Detected lcore 88 as core 4 on socket 1 00:04:35.689 EAL: Detected lcore 89 as core 5 on socket 1 00:04:35.689 EAL: Detected lcore 90 as core 6 on socket 1 00:04:35.689 EAL: Detected lcore 91 as core 8 on socket 1 00:04:35.689 EAL: Detected lcore 92 as core 9 on socket 1 00:04:35.689 EAL: Detected lcore 93 as core 10 on socket 1 00:04:35.689 EAL: Detected lcore 94 as core 11 on socket 1 00:04:35.689 EAL: Detected lcore 95 as core 12 on socket 1 00:04:35.689 EAL: Detected lcore 96 as core 13 on socket 1 00:04:35.689 EAL: Detected lcore 97 as core 14 on socket 1 00:04:35.689 EAL: Detected lcore 98 as core 16 on socket 1 00:04:35.689 EAL: Detected lcore 99 as core 17 on socket 1 00:04:35.689 EAL: Detected lcore 100 as core 18 on socket 1 00:04:35.689 EAL: Detected lcore 101 as core 19 on socket 1 00:04:35.689 EAL: Detected lcore 102 as core 20 on socket 1 00:04:35.689 EAL: Detected lcore 103 as core 21 on socket 1 00:04:35.689 EAL: Detected lcore 104 as core 22 on socket 1 00:04:35.689 EAL: Detected lcore 105 as core 24 on socket 1 00:04:35.689 EAL: Detected 
lcore 106 as core 25 on socket 1 00:04:35.689 EAL: Detected lcore 107 as core 26 on socket 1 00:04:35.689 EAL: Detected lcore 108 as core 27 on socket 1 00:04:35.689 EAL: Detected lcore 109 as core 28 on socket 1 00:04:35.689 EAL: Detected lcore 110 as core 29 on socket 1 00:04:35.689 EAL: Detected lcore 111 as core 30 on socket 1 00:04:35.689 EAL: Maximum logical cores by configuration: 128 00:04:35.689 EAL: Detected CPU lcores: 112 00:04:35.689 EAL: Detected NUMA nodes: 2 00:04:35.689 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:35.689 EAL: Detected shared linkage of DPDK 00:04:35.689 EAL: No shared files mode enabled, IPC will be disabled 00:04:35.689 EAL: Bus pci wants IOVA as 'DC' 00:04:35.689 EAL: Buses did not request a specific IOVA mode. 00:04:35.689 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:35.689 EAL: Selected IOVA mode 'VA' 00:04:35.689 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.689 EAL: Probing VFIO support... 00:04:35.689 EAL: IOMMU type 1 (Type 1) is supported 00:04:35.689 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:35.689 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:35.689 EAL: VFIO support initialized 00:04:35.689 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.689 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.689 EAL: Setting up physically contiguous memory... 00:04:35.689 EAL: Setting maximum number of open files to 524288 00:04:35.689 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.689 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:35.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:35.689 EAL: 
Memseg list allocated at socket 1, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:35.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.689 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:35.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.689 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:35.689 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:35.689 EAL: Hugepages will be freed exactly as allocated. 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: TSC frequency is ~2200000 KHz 00:04:35.689 EAL: Main lcore 0 is ready (tid=7f16c1e3ba00;cpuset=[0]) 00:04:35.689 EAL: Trying to obtain current memory policy. 00:04:35.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.689 EAL: Restoring previous memory policy: 0 00:04:35.689 EAL: request: mp_malloc_sync 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.689 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.689 00:04:35.689 00:04:35.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.689 http://cunit.sourceforge.net/ 00:04:35.689 00:04:35.689 00:04:35.689 Suite: components_suite 00:04:35.689 Test: vtophys_malloc_test ...passed 00:04:35.689 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.689 EAL: Restoring previous memory policy: 4 00:04:35.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.689 EAL: request: mp_malloc_sync 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.689 EAL: request: mp_malloc_sync 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.689 EAL: Trying to obtain current memory policy. 
00:04:35.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.689 EAL: Restoring previous memory policy: 4 00:04:35.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.689 EAL: request: mp_malloc_sync 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.689 EAL: request: mp_malloc_sync 00:04:35.689 EAL: No shared files mode enabled, IPC is disabled 00:04:35.689 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.689 EAL: Trying to obtain current memory policy. 00:04:35.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.690 EAL: Restoring previous memory policy: 4 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.690 EAL: Trying to obtain current memory policy. 00:04:35.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.690 EAL: Restoring previous memory policy: 4 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.690 EAL: Trying to obtain current memory policy. 00:04:35.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.690 EAL: Restoring previous memory policy: 4 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.690 EAL: Trying to obtain current memory policy. 00:04:35.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.690 EAL: Restoring previous memory policy: 4 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was shrunk by 66MB 00:04:35.690 EAL: Trying to obtain current memory policy. 
00:04:35.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.690 EAL: Restoring previous memory policy: 4 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.690 EAL: request: mp_malloc_sync 00:04:35.690 EAL: No shared files mode enabled, IPC is disabled 00:04:35.690 EAL: Heap on socket 0 was shrunk by 130MB 00:04:35.690 EAL: Trying to obtain current memory policy. 00:04:35.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.949 EAL: Restoring previous memory policy: 4 00:04:35.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.949 EAL: request: mp_malloc_sync 00:04:35.949 EAL: No shared files mode enabled, IPC is disabled 00:04:35.949 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.949 EAL: request: mp_malloc_sync 00:04:35.949 EAL: No shared files mode enabled, IPC is disabled 00:04:35.949 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.949 EAL: Trying to obtain current memory policy. 00:04:35.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.949 EAL: Restoring previous memory policy: 4 00:04:35.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.949 EAL: request: mp_malloc_sync 00:04:35.949 EAL: No shared files mode enabled, IPC is disabled 00:04:35.949 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.208 EAL: request: mp_malloc_sync 00:04:36.208 EAL: No shared files mode enabled, IPC is disabled 00:04:36.209 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.209 EAL: Trying to obtain current memory policy. 
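Each "Heap on socket 0 was expanded/shrunk by N MB" message in this test corresponds to 2048 kB hugepages the EAL maps in and releases through the registered mem event callback. The accounting can be cross-checked from a shell on the node with the stock kernel counters (standard procfs/sysfs paths, not taken from this log):

  grep -E '^HugePages_(Total|Free)' /proc/meminfo
  # per-NUMA-node view, matching the "socket 0" wording in the EAL messages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
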
00:04:36.209 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.468 EAL: Restoring previous memory policy: 4 00:04:36.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.468 EAL: request: mp_malloc_sync 00:04:36.468 EAL: No shared files mode enabled, IPC is disabled 00:04:36.468 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.728 EAL: request: mp_malloc_sync 00:04:36.728 EAL: No shared files mode enabled, IPC is disabled 00:04:36.728 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:36.728 passed 00:04:36.728 00:04:36.728 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.728 suites 1 1 n/a 0 0 00:04:36.728 tests 2 2 2 0 0 00:04:36.728 asserts 497 497 497 0 n/a 00:04:36.728 00:04:36.728 Elapsed time = 1.020 seconds 00:04:36.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.728 EAL: request: mp_malloc_sync 00:04:36.728 EAL: No shared files mode enabled, IPC is disabled 00:04:36.728 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.728 EAL: No shared files mode enabled, IPC is disabled 00:04:36.728 EAL: No shared files mode enabled, IPC is disabled 00:04:36.728 EAL: No shared files mode enabled, IPC is disabled 00:04:36.728 00:04:36.728 real 0m1.166s 00:04:36.728 user 0m0.675s 00:04:36.728 sys 0m0.455s 00:04:36.728 18:41:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.728 18:41:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.728 ************************************ 00:04:36.728 END TEST env_vtophys 00:04:36.728 ************************************ 00:04:36.728 18:41:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:36.728 18:41:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.728 18:41:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.728 18:41:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.728 ************************************ 00:04:36.728 START TEST env_pci 00:04:36.728 ************************************ 00:04:36.728 18:41:21 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:36.728 00:04:36.728 00:04:36.728 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.728 http://cunit.sourceforge.net/ 00:04:36.728 00:04:36.728 00:04:36.728 Suite: pci 00:04:36.728 Test: pci_hook ...[2024-07-24 18:41:21.695358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2296424 has claimed it 00:04:36.988 EAL: Cannot find device (10000:00:01.0) 00:04:36.988 EAL: Failed to attach device on primary process 00:04:36.988 passed 00:04:36.988 00:04:36.988 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.988 suites 1 1 n/a 0 0 00:04:36.988 tests 1 1 1 0 0 00:04:36.988 asserts 25 25 25 0 n/a 00:04:36.988 00:04:36.988 Elapsed time = 0.050 seconds 00:04:36.988 00:04:36.988 real 0m0.072s 00:04:36.988 user 0m0.022s 00:04:36.988 sys 0m0.050s 00:04:36.988 18:41:21 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.988 18:41:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.988 ************************************ 00:04:36.988 END TEST env_pci 00:04:36.988 ************************************ 00:04:36.988 18:41:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:36.988 
18:41:21 env -- env/env.sh@15 -- # uname 00:04:36.988 18:41:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:36.988 18:41:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:36.988 18:41:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.988 18:41:21 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:36.988 18:41:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.988 18:41:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.988 ************************************ 00:04:36.988 START TEST env_dpdk_post_init 00:04:36.988 ************************************ 00:04:36.988 18:41:21 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.988 EAL: Detected CPU lcores: 112 00:04:36.988 EAL: Detected NUMA nodes: 2 00:04:36.988 EAL: Detected shared linkage of DPDK 00:04:36.988 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.988 EAL: Selected IOVA mode 'VA' 00:04:36.988 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.988 EAL: VFIO support initialized 00:04:36.988 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.988 EAL: Using IOMMU type 1 (Type 1) 00:04:36.988 EAL: Ignore mapping IO port bar(1) 00:04:36.988 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:36.988 EAL: Ignore mapping IO port bar(1) 00:04:36.988 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:37.248 EAL: Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:37.248 EAL: 
Ignore mapping IO port bar(1) 00:04:37.248 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:38.186 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:04:41.476 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:04:41.476 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:04:41.476 Starting DPDK initialization... 00:04:41.476 Starting SPDK post initialization... 00:04:41.476 SPDK NVMe probe 00:04:41.476 Attaching to 0000:86:00.0 00:04:41.476 Attached to 0000:86:00.0 00:04:41.476 Cleaning up... 00:04:41.476 00:04:41.476 real 0m4.429s 00:04:41.476 user 0m3.320s 00:04:41.476 sys 0m0.161s 00:04:41.476 18:41:26 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.476 18:41:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 ************************************ 00:04:41.476 END TEST env_dpdk_post_init 00:04:41.476 ************************************ 00:04:41.476 18:41:26 env -- env/env.sh@26 -- # uname 00:04:41.476 18:41:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.476 18:41:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.476 18:41:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.476 18:41:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.476 18:41:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 ************************************ 00:04:41.476 START TEST env_mem_callbacks 00:04:41.476 ************************************ 00:04:41.476 18:41:26 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.476 EAL: Detected CPU lcores: 112 00:04:41.476 EAL: Detected NUMA nodes: 2 00:04:41.476 EAL: Detected shared linkage of DPDK 00:04:41.476 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.476 EAL: Selected IOVA mode 'VA' 00:04:41.476 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.476 EAL: VFIO support initialized 00:04:41.476 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.476 00:04:41.476 00:04:41.476 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.476 http://cunit.sourceforge.net/ 00:04:41.476 00:04:41.476 00:04:41.476 Suite: memory 00:04:41.476 Test: test ... 
00:04:41.476 register 0x200000200000 2097152 00:04:41.476 malloc 3145728 00:04:41.476 register 0x200000400000 4194304 00:04:41.476 buf 0x200000500000 len 3145728 PASSED 00:04:41.476 malloc 64 00:04:41.476 buf 0x2000004fff40 len 64 PASSED 00:04:41.476 malloc 4194304 00:04:41.476 register 0x200000800000 6291456 00:04:41.476 buf 0x200000a00000 len 4194304 PASSED 00:04:41.476 free 0x200000500000 3145728 00:04:41.476 free 0x2000004fff40 64 00:04:41.476 unregister 0x200000400000 4194304 PASSED 00:04:41.476 free 0x200000a00000 4194304 00:04:41.476 unregister 0x200000800000 6291456 PASSED 00:04:41.476 malloc 8388608 00:04:41.476 register 0x200000400000 10485760 00:04:41.476 buf 0x200000600000 len 8388608 PASSED 00:04:41.476 free 0x200000600000 8388608 00:04:41.476 unregister 0x200000400000 10485760 PASSED 00:04:41.476 passed 00:04:41.476 00:04:41.476 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.476 suites 1 1 n/a 0 0 00:04:41.476 tests 1 1 1 0 0 00:04:41.476 asserts 15 15 15 0 n/a 00:04:41.476 00:04:41.476 Elapsed time = 0.007 seconds 00:04:41.476 00:04:41.476 real 0m0.061s 00:04:41.476 user 0m0.018s 00:04:41.476 sys 0m0.043s 00:04:41.476 18:41:26 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.476 18:41:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 ************************************ 00:04:41.476 END TEST env_mem_callbacks 00:04:41.476 ************************************ 00:04:41.476 00:04:41.476 real 0m6.389s 00:04:41.476 user 0m4.417s 00:04:41.476 sys 0m1.019s 00:04:41.476 18:41:26 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.476 18:41:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 ************************************ 00:04:41.476 END TEST env 00:04:41.476 ************************************ 00:04:41.476 18:41:26 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:41.476 18:41:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.476 18:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.477 18:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:41.736 ************************************ 00:04:41.736 START TEST rpc 00:04:41.736 ************************************ 00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:41.736 * Looking for test storage... 00:04:41.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.736 18:41:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2297340 00:04:41.736 18:41:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.736 18:41:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:41.736 18:41:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2297340 00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@829 -- # '[' -z 2297340 ']' 00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.736 18:41:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.736 [2024-07-24 18:41:26.642744] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:04:41.736 [2024-07-24 18:41:26.642804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297340 ] 00:04:41.736 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.736 [2024-07-24 18:41:26.724785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.995 [2024-07-24 18:41:26.814366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:41.995 [2024-07-24 18:41:26.814413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2297340' to capture a snapshot of events at runtime. 00:04:41.995 [2024-07-24 18:41:26.814424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:41.995 [2024-07-24 18:41:26.814434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:41.995 [2024-07-24 18:41:26.814441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2297340 for offline analysis/debug. 00:04:41.995 [2024-07-24 18:41:26.814466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.255 18:41:27 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.255 18:41:27 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:42.255 18:41:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.255 18:41:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.255 18:41:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:42.255 18:41:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:42.255 18:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.255 18:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.255 18:41:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.255 ************************************ 00:04:42.255 START TEST rpc_integrity 00:04:42.255 ************************************ 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:42.255 18:41:27 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.255 { 00:04:42.255 "name": "Malloc0", 00:04:42.255 "aliases": [ 00:04:42.255 "338e0c6c-528e-4d1f-8cb4-b39990c288a3" 00:04:42.255 ], 00:04:42.255 "product_name": "Malloc disk", 00:04:42.255 "block_size": 512, 00:04:42.255 "num_blocks": 16384, 00:04:42.255 "uuid": "338e0c6c-528e-4d1f-8cb4-b39990c288a3", 00:04:42.255 "assigned_rate_limits": { 00:04:42.255 "rw_ios_per_sec": 0, 00:04:42.255 "rw_mbytes_per_sec": 0, 00:04:42.255 "r_mbytes_per_sec": 0, 00:04:42.255 "w_mbytes_per_sec": 0 00:04:42.255 }, 00:04:42.255 "claimed": false, 00:04:42.255 "zoned": false, 00:04:42.255 "supported_io_types": { 00:04:42.255 "read": true, 00:04:42.255 "write": true, 00:04:42.255 "unmap": true, 00:04:42.255 "flush": true, 00:04:42.255 "reset": true, 00:04:42.255 "nvme_admin": false, 00:04:42.255 "nvme_io": false, 00:04:42.255 "nvme_io_md": false, 00:04:42.255 "write_zeroes": true, 00:04:42.255 "zcopy": true, 00:04:42.255 "get_zone_info": false, 00:04:42.255 "zone_management": false, 00:04:42.255 "zone_append": false, 00:04:42.255 "compare": false, 00:04:42.255 "compare_and_write": false, 00:04:42.255 "abort": true, 00:04:42.255 "seek_hole": false, 00:04:42.255 "seek_data": false, 00:04:42.255 "copy": true, 00:04:42.255 "nvme_iov_md": false 00:04:42.255 }, 00:04:42.255 "memory_domains": [ 00:04:42.255 { 00:04:42.255 "dma_device_id": "system", 00:04:42.255 "dma_device_type": 1 00:04:42.255 }, 00:04:42.255 { 00:04:42.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.255 "dma_device_type": 2 00:04:42.255 } 00:04:42.255 ], 00:04:42.255 "driver_specific": {} 00:04:42.255 } 00:04:42.255 ]' 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.255 [2024-07-24 18:41:27.206812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:42.255 [2024-07-24 18:41:27.206853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.255 [2024-07-24 18:41:27.206870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x240bc40 00:04:42.255 [2024-07-24 18:41:27.206879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.255 [2024-07-24 18:41:27.208434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
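rpc_integrity builds a malloc bdev, layers a passthru bdev on top of it (which claims the base, hence the "bdev claimed" notice above), and checks the bdev count at each step. The same flow, sketched with the rpc.py client in place of the rpc_cmd wrapper (commands exactly as traced in this run):

  scripts/rpc.py bdev_malloc_create 8 512                 # 8 MB, 512-byte blocks -> "Malloc0"
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length               # expect 2 (Malloc0 + Passthru0)
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length               # expect 0 again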
00:04:42.255 [2024-07-24 18:41:27.208460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.255 Passthru0 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.255 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.255 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.255 { 00:04:42.255 "name": "Malloc0", 00:04:42.255 "aliases": [ 00:04:42.255 "338e0c6c-528e-4d1f-8cb4-b39990c288a3" 00:04:42.255 ], 00:04:42.255 "product_name": "Malloc disk", 00:04:42.255 "block_size": 512, 00:04:42.255 "num_blocks": 16384, 00:04:42.255 "uuid": "338e0c6c-528e-4d1f-8cb4-b39990c288a3", 00:04:42.255 "assigned_rate_limits": { 00:04:42.255 "rw_ios_per_sec": 0, 00:04:42.255 "rw_mbytes_per_sec": 0, 00:04:42.255 "r_mbytes_per_sec": 0, 00:04:42.255 "w_mbytes_per_sec": 0 00:04:42.255 }, 00:04:42.255 "claimed": true, 00:04:42.255 "claim_type": "exclusive_write", 00:04:42.255 "zoned": false, 00:04:42.255 "supported_io_types": { 00:04:42.255 "read": true, 00:04:42.255 "write": true, 00:04:42.255 "unmap": true, 00:04:42.255 "flush": true, 00:04:42.255 "reset": true, 00:04:42.255 "nvme_admin": false, 00:04:42.255 "nvme_io": false, 00:04:42.255 "nvme_io_md": false, 00:04:42.255 "write_zeroes": true, 00:04:42.255 "zcopy": true, 00:04:42.255 "get_zone_info": false, 00:04:42.255 "zone_management": false, 00:04:42.255 "zone_append": false, 00:04:42.255 "compare": false, 00:04:42.255 "compare_and_write": false, 00:04:42.255 "abort": true, 00:04:42.255 "seek_hole": false, 00:04:42.255 "seek_data": false, 00:04:42.255 "copy": true, 00:04:42.255 "nvme_iov_md": false 00:04:42.255 }, 00:04:42.255 "memory_domains": [ 00:04:42.255 { 00:04:42.255 "dma_device_id": "system", 00:04:42.255 "dma_device_type": 1 00:04:42.255 }, 00:04:42.255 { 00:04:42.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.255 "dma_device_type": 2 00:04:42.255 } 00:04:42.255 ], 00:04:42.255 "driver_specific": {} 00:04:42.255 }, 00:04:42.255 { 00:04:42.255 "name": "Passthru0", 00:04:42.255 "aliases": [ 00:04:42.255 "47fb49f0-42f8-588e-ab5e-f983bd27bf65" 00:04:42.255 ], 00:04:42.255 "product_name": "passthru", 00:04:42.255 "block_size": 512, 00:04:42.255 "num_blocks": 16384, 00:04:42.256 "uuid": "47fb49f0-42f8-588e-ab5e-f983bd27bf65", 00:04:42.256 "assigned_rate_limits": { 00:04:42.256 "rw_ios_per_sec": 0, 00:04:42.256 "rw_mbytes_per_sec": 0, 00:04:42.256 "r_mbytes_per_sec": 0, 00:04:42.256 "w_mbytes_per_sec": 0 00:04:42.256 }, 00:04:42.256 "claimed": false, 00:04:42.256 "zoned": false, 00:04:42.256 "supported_io_types": { 00:04:42.256 "read": true, 00:04:42.256 "write": true, 00:04:42.256 "unmap": true, 00:04:42.256 "flush": true, 00:04:42.256 "reset": true, 00:04:42.256 "nvme_admin": false, 00:04:42.256 "nvme_io": false, 00:04:42.256 "nvme_io_md": false, 00:04:42.256 "write_zeroes": true, 00:04:42.256 "zcopy": true, 00:04:42.256 "get_zone_info": false, 00:04:42.256 "zone_management": false, 00:04:42.256 "zone_append": false, 00:04:42.256 "compare": false, 00:04:42.256 "compare_and_write": false, 00:04:42.256 "abort": true, 00:04:42.256 "seek_hole": false, 00:04:42.256 "seek_data": false, 00:04:42.256 "copy": true, 00:04:42.256 "nvme_iov_md": false 00:04:42.256 
}, 00:04:42.256 "memory_domains": [ 00:04:42.256 { 00:04:42.256 "dma_device_id": "system", 00:04:42.256 "dma_device_type": 1 00:04:42.256 }, 00:04:42.256 { 00:04:42.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.256 "dma_device_type": 2 00:04:42.256 } 00:04:42.256 ], 00:04:42.256 "driver_specific": { 00:04:42.256 "passthru": { 00:04:42.256 "name": "Passthru0", 00:04:42.256 "base_bdev_name": "Malloc0" 00:04:42.256 } 00:04:42.256 } 00:04:42.256 } 00:04:42.256 ]' 00:04:42.256 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:42.516 18:41:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.516 00:04:42.516 real 0m0.298s 00:04:42.516 user 0m0.182s 00:04:42.516 sys 0m0.047s 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 ************************************ 00:04:42.516 END TEST rpc_integrity 00:04:42.516 ************************************ 00:04:42.516 18:41:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:42.516 18:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.516 18:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.516 18:41:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 ************************************ 00:04:42.516 START TEST rpc_plugins 00:04:42.516 ************************************ 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:42.516 { 00:04:42.516 "name": "Malloc1", 00:04:42.516 "aliases": [ 00:04:42.516 "c5c01bbc-53a2-465d-b240-1bb4d37ac6bb" 00:04:42.516 ], 00:04:42.516 "product_name": "Malloc disk", 00:04:42.516 "block_size": 4096, 00:04:42.516 "num_blocks": 256, 00:04:42.516 "uuid": "c5c01bbc-53a2-465d-b240-1bb4d37ac6bb", 00:04:42.516 "assigned_rate_limits": { 00:04:42.516 "rw_ios_per_sec": 0, 00:04:42.516 "rw_mbytes_per_sec": 0, 00:04:42.516 "r_mbytes_per_sec": 0, 00:04:42.516 "w_mbytes_per_sec": 0 00:04:42.516 }, 00:04:42.516 "claimed": false, 00:04:42.516 "zoned": false, 00:04:42.516 "supported_io_types": { 00:04:42.516 "read": true, 00:04:42.516 "write": true, 00:04:42.516 "unmap": true, 00:04:42.516 "flush": true, 00:04:42.516 "reset": true, 00:04:42.516 "nvme_admin": false, 00:04:42.516 "nvme_io": false, 00:04:42.516 "nvme_io_md": false, 00:04:42.516 "write_zeroes": true, 00:04:42.516 "zcopy": true, 00:04:42.516 "get_zone_info": false, 00:04:42.516 "zone_management": false, 00:04:42.516 "zone_append": false, 00:04:42.516 "compare": false, 00:04:42.516 "compare_and_write": false, 00:04:42.516 "abort": true, 00:04:42.516 "seek_hole": false, 00:04:42.516 "seek_data": false, 00:04:42.516 "copy": true, 00:04:42.516 "nvme_iov_md": false 00:04:42.516 }, 00:04:42.516 "memory_domains": [ 00:04:42.516 { 00:04:42.516 "dma_device_id": "system", 00:04:42.516 "dma_device_type": 1 00:04:42.516 }, 00:04:42.516 { 00:04:42.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.516 "dma_device_type": 2 00:04:42.516 } 00:04:42.516 ], 00:04:42.516 "driver_specific": {} 00:04:42.516 } 00:04:42.516 ]' 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.516 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.516 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.776 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.776 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:42.776 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:42.776 18:41:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:42.776 00:04:42.776 real 0m0.144s 00:04:42.776 user 0m0.093s 00:04:42.776 sys 0m0.017s 00:04:42.776 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.776 18:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.776 ************************************ 00:04:42.776 END TEST rpc_plugins 00:04:42.776 ************************************ 00:04:42.776 18:41:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:42.776 18:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.776 18:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.776 18:41:27 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.776 ************************************ 00:04:42.776 START TEST rpc_trace_cmd_test 00:04:42.776 ************************************ 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:42.776 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2297340", 00:04:42.776 "tpoint_group_mask": "0x8", 00:04:42.776 "iscsi_conn": { 00:04:42.776 "mask": "0x2", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "scsi": { 00:04:42.776 "mask": "0x4", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "bdev": { 00:04:42.776 "mask": "0x8", 00:04:42.776 "tpoint_mask": "0xffffffffffffffff" 00:04:42.776 }, 00:04:42.776 "nvmf_rdma": { 00:04:42.776 "mask": "0x10", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "nvmf_tcp": { 00:04:42.776 "mask": "0x20", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "ftl": { 00:04:42.776 "mask": "0x40", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "blobfs": { 00:04:42.776 "mask": "0x80", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "dsa": { 00:04:42.776 "mask": "0x200", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "thread": { 00:04:42.776 "mask": "0x400", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "nvme_pcie": { 00:04:42.776 "mask": "0x800", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "iaa": { 00:04:42.776 "mask": "0x1000", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "nvme_tcp": { 00:04:42.776 "mask": "0x2000", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "bdev_nvme": { 00:04:42.776 "mask": "0x4000", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 }, 00:04:42.776 "sock": { 00:04:42.776 "mask": "0x8000", 00:04:42.776 "tpoint_mask": "0x0" 00:04:42.776 } 00:04:42.776 }' 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:42.776 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.035 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.035 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.035 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.035 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.035 18:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.035 00:04:43.035 real 0m0.243s 00:04:43.035 user 0m0.210s 00:04:43.035 sys 0m0.026s 00:04:43.035 18:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.035 18:41:27 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.035 ************************************ 00:04:43.035 END TEST rpc_trace_cmd_test 00:04:43.035 ************************************ 00:04:43.035 18:41:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.035 18:41:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.035 18:41:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.035 18:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.035 18:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.035 18:41:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.035 ************************************ 00:04:43.035 START TEST rpc_daemon_integrity 00:04:43.035 ************************************ 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.035 18:41:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.035 { 00:04:43.035 "name": "Malloc2", 00:04:43.035 "aliases": [ 00:04:43.035 "070d49b1-60a2-445f-945c-3d7da84a7f76" 00:04:43.035 ], 00:04:43.035 "product_name": "Malloc disk", 00:04:43.035 "block_size": 512, 00:04:43.035 "num_blocks": 16384, 00:04:43.035 "uuid": "070d49b1-60a2-445f-945c-3d7da84a7f76", 00:04:43.035 "assigned_rate_limits": { 00:04:43.035 "rw_ios_per_sec": 0, 00:04:43.035 "rw_mbytes_per_sec": 0, 00:04:43.035 "r_mbytes_per_sec": 0, 00:04:43.035 "w_mbytes_per_sec": 0 00:04:43.035 }, 00:04:43.035 "claimed": false, 00:04:43.035 "zoned": false, 00:04:43.035 "supported_io_types": { 00:04:43.035 "read": true, 00:04:43.035 "write": true, 00:04:43.035 "unmap": true, 00:04:43.035 "flush": true, 00:04:43.035 "reset": true, 00:04:43.035 "nvme_admin": false, 00:04:43.035 "nvme_io": false, 00:04:43.035 "nvme_io_md": false, 00:04:43.035 "write_zeroes": true, 00:04:43.035 "zcopy": true, 00:04:43.035 "get_zone_info": false, 00:04:43.035 "zone_management": false, 00:04:43.035 "zone_append": false, 00:04:43.035 "compare": false, 00:04:43.035 "compare_and_write": false, 
00:04:43.035 "abort": true, 00:04:43.035 "seek_hole": false, 00:04:43.035 "seek_data": false, 00:04:43.035 "copy": true, 00:04:43.035 "nvme_iov_md": false 00:04:43.035 }, 00:04:43.035 "memory_domains": [ 00:04:43.035 { 00:04:43.035 "dma_device_id": "system", 00:04:43.035 "dma_device_type": 1 00:04:43.035 }, 00:04:43.035 { 00:04:43.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.035 "dma_device_type": 2 00:04:43.035 } 00:04:43.035 ], 00:04:43.035 "driver_specific": {} 00:04:43.035 } 00:04:43.035 ]' 00:04:43.035 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.294 [2024-07-24 18:41:28.089382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:43.294 [2024-07-24 18:41:28.089420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.294 [2024-07-24 18:41:28.089438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b4c70 00:04:43.294 [2024-07-24 18:41:28.089447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.294 [2024-07-24 18:41:28.090831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.294 [2024-07-24 18:41:28.090858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.294 Passthru0 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.294 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.294 { 00:04:43.294 "name": "Malloc2", 00:04:43.294 "aliases": [ 00:04:43.294 "070d49b1-60a2-445f-945c-3d7da84a7f76" 00:04:43.294 ], 00:04:43.294 "product_name": "Malloc disk", 00:04:43.294 "block_size": 512, 00:04:43.294 "num_blocks": 16384, 00:04:43.294 "uuid": "070d49b1-60a2-445f-945c-3d7da84a7f76", 00:04:43.294 "assigned_rate_limits": { 00:04:43.294 "rw_ios_per_sec": 0, 00:04:43.294 "rw_mbytes_per_sec": 0, 00:04:43.294 "r_mbytes_per_sec": 0, 00:04:43.294 "w_mbytes_per_sec": 0 00:04:43.294 }, 00:04:43.294 "claimed": true, 00:04:43.294 "claim_type": "exclusive_write", 00:04:43.294 "zoned": false, 00:04:43.294 "supported_io_types": { 00:04:43.294 "read": true, 00:04:43.294 "write": true, 00:04:43.294 "unmap": true, 00:04:43.294 "flush": true, 00:04:43.294 "reset": true, 00:04:43.294 "nvme_admin": false, 00:04:43.294 "nvme_io": false, 00:04:43.294 "nvme_io_md": false, 00:04:43.294 "write_zeroes": true, 00:04:43.294 "zcopy": true, 00:04:43.294 "get_zone_info": false, 00:04:43.294 "zone_management": false, 00:04:43.294 "zone_append": false, 00:04:43.295 "compare": false, 00:04:43.295 "compare_and_write": false, 00:04:43.295 "abort": true, 00:04:43.295 "seek_hole": false, 00:04:43.295 "seek_data": false, 00:04:43.295 "copy": true, 
00:04:43.295 "nvme_iov_md": false 00:04:43.295 }, 00:04:43.295 "memory_domains": [ 00:04:43.295 { 00:04:43.295 "dma_device_id": "system", 00:04:43.295 "dma_device_type": 1 00:04:43.295 }, 00:04:43.295 { 00:04:43.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.295 "dma_device_type": 2 00:04:43.295 } 00:04:43.295 ], 00:04:43.295 "driver_specific": {} 00:04:43.295 }, 00:04:43.295 { 00:04:43.295 "name": "Passthru0", 00:04:43.295 "aliases": [ 00:04:43.295 "856c5669-a280-5f60-b109-1fc3d77e1387" 00:04:43.295 ], 00:04:43.295 "product_name": "passthru", 00:04:43.295 "block_size": 512, 00:04:43.295 "num_blocks": 16384, 00:04:43.295 "uuid": "856c5669-a280-5f60-b109-1fc3d77e1387", 00:04:43.295 "assigned_rate_limits": { 00:04:43.295 "rw_ios_per_sec": 0, 00:04:43.295 "rw_mbytes_per_sec": 0, 00:04:43.295 "r_mbytes_per_sec": 0, 00:04:43.295 "w_mbytes_per_sec": 0 00:04:43.295 }, 00:04:43.295 "claimed": false, 00:04:43.295 "zoned": false, 00:04:43.295 "supported_io_types": { 00:04:43.295 "read": true, 00:04:43.295 "write": true, 00:04:43.295 "unmap": true, 00:04:43.295 "flush": true, 00:04:43.295 "reset": true, 00:04:43.295 "nvme_admin": false, 00:04:43.295 "nvme_io": false, 00:04:43.295 "nvme_io_md": false, 00:04:43.295 "write_zeroes": true, 00:04:43.295 "zcopy": true, 00:04:43.295 "get_zone_info": false, 00:04:43.295 "zone_management": false, 00:04:43.295 "zone_append": false, 00:04:43.295 "compare": false, 00:04:43.295 "compare_and_write": false, 00:04:43.295 "abort": true, 00:04:43.295 "seek_hole": false, 00:04:43.295 "seek_data": false, 00:04:43.295 "copy": true, 00:04:43.295 "nvme_iov_md": false 00:04:43.295 }, 00:04:43.295 "memory_domains": [ 00:04:43.295 { 00:04:43.295 "dma_device_id": "system", 00:04:43.295 "dma_device_type": 1 00:04:43.295 }, 00:04:43.295 { 00:04:43.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.295 "dma_device_type": 2 00:04:43.295 } 00:04:43.295 ], 00:04:43.295 "driver_specific": { 00:04:43.295 "passthru": { 00:04:43.295 "name": "Passthru0", 00:04:43.295 "base_bdev_name": "Malloc2" 00:04:43.295 } 00:04:43.295 } 00:04:43.295 } 00:04:43.295 ]' 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.295 18:41:28 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.295 00:04:43.295 real 0m0.291s 00:04:43.295 user 0m0.186s 00:04:43.295 sys 0m0.041s 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.295 18:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.295 ************************************ 00:04:43.295 END TEST rpc_daemon_integrity 00:04:43.295 ************************************ 00:04:43.295 18:41:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:43.295 18:41:28 rpc -- rpc/rpc.sh@84 -- # killprocess 2297340 00:04:43.295 18:41:28 rpc -- common/autotest_common.sh@948 -- # '[' -z 2297340 ']' 00:04:43.295 18:41:28 rpc -- common/autotest_common.sh@952 -- # kill -0 2297340 00:04:43.295 18:41:28 rpc -- common/autotest_common.sh@953 -- # uname 00:04:43.295 18:41:28 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.295 18:41:28 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2297340 00:04:43.554 18:41:28 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.554 18:41:28 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.554 18:41:28 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2297340' 00:04:43.554 killing process with pid 2297340 00:04:43.554 18:41:28 rpc -- common/autotest_common.sh@967 -- # kill 2297340 00:04:43.554 18:41:28 rpc -- common/autotest_common.sh@972 -- # wait 2297340 00:04:43.813 00:04:43.813 real 0m2.165s 00:04:43.813 user 0m2.812s 00:04:43.813 sys 0m0.724s 00:04:43.813 18:41:28 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.813 18:41:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.813 ************************************ 00:04:43.813 END TEST rpc 00:04:43.813 ************************************ 00:04:43.813 18:41:28 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:43.813 18:41:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.813 18:41:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.813 18:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:43.813 ************************************ 00:04:43.813 START TEST skip_rpc 00:04:43.813 ************************************ 00:04:43.813 18:41:28 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:43.813 * Looking for test storage... 
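The killprocess teardown traced at the end of TEST rpc above follows a fixed pattern before skip_rpc begins: confirm the pid is still alive, confirm it names an SPDK reactor rather than a sudo wrapper, then kill and reap it. As a standalone sketch (wait only succeeds because the target was started by the same shell):

  pid=2297340                                  # target pid from this run
  kill -0 "$pid"                               # fails fast if the process already exited
  [ "$(ps --no-headers -o comm= "$pid")" != sudo ] && kill "$pid"
  wait "$pid"                                  # reap and propagate the exit status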
00:04:43.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.813 18:41:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.813 18:41:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.813 18:41:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:43.813 18:41:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.813 18:41:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.813 18:41:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.073 ************************************ 00:04:44.073 START TEST skip_rpc 00:04:44.073 ************************************ 00:04:44.073 18:41:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:44.073 18:41:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2298036 00:04:44.073 18:41:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.073 18:41:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:44.073 18:41:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:44.073 [2024-07-24 18:41:28.906532] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:04:44.073 [2024-07-24 18:41:28.906589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298036 ] 00:04:44.073 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.073 [2024-07-24 18:41:28.989985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.073 [2024-07-24 18:41:29.078187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2298036 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2298036 ']' 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2298036 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2298036 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2298036' 00:04:49.353 killing process with pid 2298036 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2298036 00:04:49.353 18:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2298036 00:04:49.353 00:04:49.353 real 0m5.396s 00:04:49.353 user 0m5.129s 00:04:49.353 sys 0m0.294s 00:04:49.353 18:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.354 18:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.354 ************************************ 00:04:49.354 END TEST skip_rpc 00:04:49.354 ************************************ 00:04:49.354 18:41:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:49.354 18:41:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.354 18:41:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.354 18:41:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.354 ************************************ 00:04:49.354 START TEST skip_rpc_with_json 00:04:49.354 ************************************ 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2299067 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2299067 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2299067 ']' 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
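The basic skip_rpc test that just finished starts the target with --no-rpc-server, so no UNIX socket is ever created and the NOT wrapper requires the version query to fail. Reduced to its essentials:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                       # no socket to poll; just give the app time to start
  if scripts/rpc.py spdk_get_version; then      # must fail: no RPC server is listening
      echo "RPC unexpectedly answered" >&2; exit 1
  fi
  kill "$spdk_pid"; wait "$spdk_pid"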
00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.354 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.354 [2024-07-24 18:41:34.357743] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:04:49.354 [2024-07-24 18:41:34.357795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299067 ] 00:04:49.613 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.613 [2024-07-24 18:41:34.439243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.613 [2024-07-24 18:41:34.530154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.872 [2024-07-24 18:41:34.753814] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:49.872 request: 00:04:49.872 { 00:04:49.872 "trtype": "tcp", 00:04:49.872 "method": "nvmf_get_transports", 00:04:49.872 "req_id": 1 00:04:49.872 } 00:04:49.872 Got JSON-RPC error response 00:04:49.872 response: 00:04:49.872 { 00:04:49.872 "code": -19, 00:04:49.872 "message": "No such device" 00:04:49.872 } 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.872 [2024-07-24 18:41:34.765951] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.872 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.131 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.131 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.131 { 00:04:50.131 "subsystems": [ 00:04:50.131 { 00:04:50.131 "subsystem": "vfio_user_target", 00:04:50.131 "config": null 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "subsystem": "keyring", 00:04:50.131 "config": [] 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "subsystem": "iobuf", 00:04:50.131 "config": [ 00:04:50.131 { 00:04:50.131 "method": "iobuf_set_options", 00:04:50.131 "params": { 00:04:50.131 "small_pool_count": 8192, 00:04:50.131 "large_pool_count": 1024, 00:04:50.131 "small_bufsize": 8192, 00:04:50.131 "large_bufsize": 
135168 00:04:50.131 } 00:04:50.131 } 00:04:50.131 ] 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "subsystem": "sock", 00:04:50.131 "config": [ 00:04:50.131 { 00:04:50.131 "method": "sock_set_default_impl", 00:04:50.131 "params": { 00:04:50.131 "impl_name": "posix" 00:04:50.131 } 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "method": "sock_impl_set_options", 00:04:50.131 "params": { 00:04:50.131 "impl_name": "ssl", 00:04:50.131 "recv_buf_size": 4096, 00:04:50.131 "send_buf_size": 4096, 00:04:50.131 "enable_recv_pipe": true, 00:04:50.131 "enable_quickack": false, 00:04:50.131 "enable_placement_id": 0, 00:04:50.131 "enable_zerocopy_send_server": true, 00:04:50.131 "enable_zerocopy_send_client": false, 00:04:50.131 "zerocopy_threshold": 0, 00:04:50.131 "tls_version": 0, 00:04:50.131 "enable_ktls": false 00:04:50.131 } 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "method": "sock_impl_set_options", 00:04:50.131 "params": { 00:04:50.131 "impl_name": "posix", 00:04:50.131 "recv_buf_size": 2097152, 00:04:50.131 "send_buf_size": 2097152, 00:04:50.131 "enable_recv_pipe": true, 00:04:50.131 "enable_quickack": false, 00:04:50.131 "enable_placement_id": 0, 00:04:50.131 "enable_zerocopy_send_server": true, 00:04:50.131 "enable_zerocopy_send_client": false, 00:04:50.131 "zerocopy_threshold": 0, 00:04:50.131 "tls_version": 0, 00:04:50.131 "enable_ktls": false 00:04:50.131 } 00:04:50.131 } 00:04:50.131 ] 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "subsystem": "vmd", 00:04:50.131 "config": [] 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "subsystem": "accel", 00:04:50.131 "config": [ 00:04:50.131 { 00:04:50.131 "method": "accel_set_options", 00:04:50.131 "params": { 00:04:50.131 "small_cache_size": 128, 00:04:50.131 "large_cache_size": 16, 00:04:50.131 "task_count": 2048, 00:04:50.131 "sequence_count": 2048, 00:04:50.131 "buf_count": 2048 00:04:50.131 } 00:04:50.131 } 00:04:50.131 ] 00:04:50.131 }, 00:04:50.131 { 00:04:50.131 "subsystem": "bdev", 00:04:50.131 "config": [ 00:04:50.131 { 00:04:50.132 "method": "bdev_set_options", 00:04:50.132 "params": { 00:04:50.132 "bdev_io_pool_size": 65535, 00:04:50.132 "bdev_io_cache_size": 256, 00:04:50.132 "bdev_auto_examine": true, 00:04:50.132 "iobuf_small_cache_size": 128, 00:04:50.132 "iobuf_large_cache_size": 16 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "bdev_raid_set_options", 00:04:50.132 "params": { 00:04:50.132 "process_window_size_kb": 1024, 00:04:50.132 "process_max_bandwidth_mb_sec": 0 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "bdev_iscsi_set_options", 00:04:50.132 "params": { 00:04:50.132 "timeout_sec": 30 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "bdev_nvme_set_options", 00:04:50.132 "params": { 00:04:50.132 "action_on_timeout": "none", 00:04:50.132 "timeout_us": 0, 00:04:50.132 "timeout_admin_us": 0, 00:04:50.132 "keep_alive_timeout_ms": 10000, 00:04:50.132 "arbitration_burst": 0, 00:04:50.132 "low_priority_weight": 0, 00:04:50.132 "medium_priority_weight": 0, 00:04:50.132 "high_priority_weight": 0, 00:04:50.132 "nvme_adminq_poll_period_us": 10000, 00:04:50.132 "nvme_ioq_poll_period_us": 0, 00:04:50.132 "io_queue_requests": 0, 00:04:50.132 "delay_cmd_submit": true, 00:04:50.132 "transport_retry_count": 4, 00:04:50.132 "bdev_retry_count": 3, 00:04:50.132 "transport_ack_timeout": 0, 00:04:50.132 "ctrlr_loss_timeout_sec": 0, 00:04:50.132 "reconnect_delay_sec": 0, 00:04:50.132 "fast_io_fail_timeout_sec": 0, 00:04:50.132 "disable_auto_failback": false, 00:04:50.132 "generate_uuids": 
false, 00:04:50.132 "transport_tos": 0, 00:04:50.132 "nvme_error_stat": false, 00:04:50.132 "rdma_srq_size": 0, 00:04:50.132 "io_path_stat": false, 00:04:50.132 "allow_accel_sequence": false, 00:04:50.132 "rdma_max_cq_size": 0, 00:04:50.132 "rdma_cm_event_timeout_ms": 0, 00:04:50.132 "dhchap_digests": [ 00:04:50.132 "sha256", 00:04:50.132 "sha384", 00:04:50.132 "sha512" 00:04:50.132 ], 00:04:50.132 "dhchap_dhgroups": [ 00:04:50.132 "null", 00:04:50.132 "ffdhe2048", 00:04:50.132 "ffdhe3072", 00:04:50.132 "ffdhe4096", 00:04:50.132 "ffdhe6144", 00:04:50.132 "ffdhe8192" 00:04:50.132 ] 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "bdev_nvme_set_hotplug", 00:04:50.132 "params": { 00:04:50.132 "period_us": 100000, 00:04:50.132 "enable": false 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "bdev_wait_for_examine" 00:04:50.132 } 00:04:50.132 ] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "scsi", 00:04:50.132 "config": null 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "scheduler", 00:04:50.132 "config": [ 00:04:50.132 { 00:04:50.132 "method": "framework_set_scheduler", 00:04:50.132 "params": { 00:04:50.132 "name": "static" 00:04:50.132 } 00:04:50.132 } 00:04:50.132 ] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "vhost_scsi", 00:04:50.132 "config": [] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "vhost_blk", 00:04:50.132 "config": [] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "ublk", 00:04:50.132 "config": [] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "nbd", 00:04:50.132 "config": [] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "nvmf", 00:04:50.132 "config": [ 00:04:50.132 { 00:04:50.132 "method": "nvmf_set_config", 00:04:50.132 "params": { 00:04:50.132 "discovery_filter": "match_any", 00:04:50.132 "admin_cmd_passthru": { 00:04:50.132 "identify_ctrlr": false 00:04:50.132 } 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "nvmf_set_max_subsystems", 00:04:50.132 "params": { 00:04:50.132 "max_subsystems": 1024 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "nvmf_set_crdt", 00:04:50.132 "params": { 00:04:50.132 "crdt1": 0, 00:04:50.132 "crdt2": 0, 00:04:50.132 "crdt3": 0 00:04:50.132 } 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "method": "nvmf_create_transport", 00:04:50.132 "params": { 00:04:50.132 "trtype": "TCP", 00:04:50.132 "max_queue_depth": 128, 00:04:50.132 "max_io_qpairs_per_ctrlr": 127, 00:04:50.132 "in_capsule_data_size": 4096, 00:04:50.132 "max_io_size": 131072, 00:04:50.132 "io_unit_size": 131072, 00:04:50.132 "max_aq_depth": 128, 00:04:50.132 "num_shared_buffers": 511, 00:04:50.132 "buf_cache_size": 4294967295, 00:04:50.132 "dif_insert_or_strip": false, 00:04:50.132 "zcopy": false, 00:04:50.132 "c2h_success": true, 00:04:50.132 "sock_priority": 0, 00:04:50.132 "abort_timeout_sec": 1, 00:04:50.132 "ack_timeout": 0, 00:04:50.132 "data_wr_pool_size": 0 00:04:50.132 } 00:04:50.132 } 00:04:50.132 ] 00:04:50.132 }, 00:04:50.132 { 00:04:50.132 "subsystem": "iscsi", 00:04:50.132 "config": [ 00:04:50.132 { 00:04:50.132 "method": "iscsi_set_options", 00:04:50.132 "params": { 00:04:50.132 "node_base": "iqn.2016-06.io.spdk", 00:04:50.132 "max_sessions": 128, 00:04:50.132 "max_connections_per_session": 2, 00:04:50.132 "max_queue_depth": 64, 00:04:50.132 "default_time2wait": 2, 00:04:50.132 "default_time2retain": 20, 00:04:50.132 "first_burst_length": 8192, 00:04:50.132 "immediate_data": true, 00:04:50.132 "allow_duplicated_isid": 
false, 00:04:50.132 "error_recovery_level": 0, 00:04:50.132 "nop_timeout": 60, 00:04:50.132 "nop_in_interval": 30, 00:04:50.132 "disable_chap": false, 00:04:50.132 "require_chap": false, 00:04:50.132 "mutual_chap": false, 00:04:50.132 "chap_group": 0, 00:04:50.132 "max_large_datain_per_connection": 64, 00:04:50.132 "max_r2t_per_connection": 4, 00:04:50.132 "pdu_pool_size": 36864, 00:04:50.132 "immediate_data_pool_size": 16384, 00:04:50.132 "data_out_pool_size": 2048 00:04:50.132 } 00:04:50.132 } 00:04:50.132 ] 00:04:50.132 } 00:04:50.132 ] 00:04:50.132 } 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2299067 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2299067 ']' 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2299067 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2299067 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2299067' 00:04:50.132 killing process with pid 2299067 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2299067 00:04:50.132 18:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2299067 00:04:50.392 18:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2299129 00:04:50.392 18:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:50.392 18:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2299129 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2299129 ']' 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2299129 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2299129 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2299129' 00:04:55.666 killing process with pid 2299129 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2299129 00:04:55.666 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 
2299129 00:04:55.925 18:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:55.925 18:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:55.925 00:04:55.925 real 0m6.407s 00:04:55.925 user 0m6.126s 00:04:55.925 sys 0m0.619s 00:04:55.925 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.925 18:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.925 ************************************ 00:04:55.925 END TEST skip_rpc_with_json 00:04:55.925 ************************************ 00:04:55.925 18:41:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:55.925 18:41:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.925 18:41:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.925 18:41:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.926 ************************************ 00:04:55.926 START TEST skip_rpc_with_delay 00:04:55.926 ************************************ 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.926 [2024-07-24 18:41:40.842758] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
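Before the remaining skip_rpc_with_delay output below, a recap of skip_rpc_with_json, which closed just above: it proves a snapshotted configuration can rebuild the target with no RPC server at all. A condensed sketch, assuming the same workspace-relative paths:

  scripts/rpc.py nvmf_create_transport -t tcp          # first target: create the TCP transport
  scripts/rpc.py save_config > test/rpc/config.json    # snapshot the live JSON configuration
  # stop the first target, then replay the snapshot with the RPC server disabled:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  spdk_pid=$!; sleep 5; kill "$spdk_pid"; wait "$spdk_pid"
  grep -q 'TCP Transport Init' test/rpc/log.txt        # transport came back from the config alone
  rm test/rpc/log.txt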
00:04:55.926 [2024-07-24 18:41:40.842837] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.926 00:04:55.926 real 0m0.078s 00:04:55.926 user 0m0.051s 00:04:55.926 sys 0m0.027s 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.926 18:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:55.926 ************************************ 00:04:55.926 END TEST skip_rpc_with_delay 00:04:55.926 ************************************ 00:04:55.926 18:41:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:55.926 18:41:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:55.926 18:41:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:55.926 18:41:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.926 18:41:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.926 18:41:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.926 ************************************ 00:04:55.926 START TEST exit_on_failed_rpc_init 00:04:55.926 ************************************ 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2300224 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2300224 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2300224 ']' 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.926 18:41:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.184 [2024-07-24 18:41:40.987742] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
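Note on the waitforlisten call above: it blocks until the freshly forked spdk_tgt (pid 2300224) is up and accepting RPCs on /var/tmp/spdk.sock, hence the "Waiting for process to start up and listen on UNIX domain socket" message. A reduced sketch of such a wait loop, where waitforsocket is an illustrative name and the real helper additionally confirms readiness with a live RPC:

    # Poll until the RPC socket appears, bailing out if the target dies first.
    waitforsocket() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited early
            [[ -S $sock ]] && return 0               # socket is up
            sleep 0.1
        done
        return 1
    }
    # usage: waitforsocket "$spdk_pid" /var/tmp/spdk.sock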
00:04:56.184 [2024-07-24 18:41:40.987800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300224 ] 00:04:56.184 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.184 [2024-07-24 18:41:41.069611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.184 [2024-07-24 18:41:41.159676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.118 18:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.118 [2024-07-24 18:41:41.919552] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:04:57.118 [2024-07-24 18:41:41.919621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300489 ] 00:04:57.118 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.118 [2024-07-24 18:41:41.999494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.118 [2024-07-24 18:41:42.100824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.118 [2024-07-24 18:41:42.100912] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:57.118 [2024-07-24 18:41:42.100929] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:57.118 [2024-07-24 18:41:42.100940] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2300224 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2300224 ']' 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2300224 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2300224 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2300224' 00:04:57.377 killing process with pid 2300224 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2300224 00:04:57.377 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2300224 00:04:57.636 00:04:57.636 real 0m1.659s 00:04:57.636 user 0m1.976s 00:04:57.636 sys 0m0.468s 00:04:57.636 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.636 18:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.636 ************************************ 00:04:57.636 END TEST exit_on_failed_rpc_init 00:04:57.636 ************************************ 00:04:57.636 18:41:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.636 00:04:57.636 real 0m13.896s 00:04:57.636 user 0m13.426s 00:04:57.636 sys 0m1.650s 00:04:57.636 18:41:42 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.636 18:41:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.636 ************************************ 00:04:57.636 END TEST skip_rpc 00:04:57.636 ************************************ 00:04:57.896 18:41:42 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.896 18:41:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.896 18:41:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.896 18:41:42 -- common/autotest_common.sh@10 -- # set +x 00:04:57.896 ************************************ 00:04:57.896 START TEST rpc_client 00:04:57.896 ************************************ 00:04:57.896 18:41:42 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.896 * Looking for test storage... 00:04:57.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:57.896 18:41:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.896 OK 00:04:57.896 18:41:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.896 00:04:57.896 real 0m0.118s 00:04:57.896 user 0m0.049s 00:04:57.896 sys 0m0.078s 00:04:57.896 18:41:42 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.896 18:41:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.896 ************************************ 00:04:57.896 END TEST rpc_client 00:04:57.896 ************************************ 00:04:57.896 18:41:42 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.896 18:41:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.896 18:41:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.896 18:41:42 -- common/autotest_common.sh@10 -- # set +x 00:04:57.896 ************************************ 00:04:57.896 START TEST json_config 00:04:57.896 ************************************ 00:04:57.896 18:41:42 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
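Note on the exit_on_failed_rpc_init result earlier in this step: the second spdk_tgt failed with "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.", which is exactly what the test asserts — two targets cannot share one RPC socket. The -r flag, used later in this log as -r /var/tmp/spdk_tgt.sock, gives each instance its own socket; a sketch with illustrative socket names and non-overlapping core masks:

    # Two concurrent targets need distinct core masks and RPC sockets.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # Address each instance through its own socket:
    # scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods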
00:04:58.155 18:41:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.155 18:41:42 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.155 18:41:42 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.155 18:41:42 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.155 18:41:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.155 18:41:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.155 18:41:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.155 18:41:42 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.155 18:41:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@47 -- # : 0 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:58.155 18:41:42 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:58.155 INFO: JSON configuration test init 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:58.155 18:41:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.155 18:41:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:58.155 18:41:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.155 18:41:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 18:41:42 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:58.155 18:41:42 json_config -- json_config/common.sh@9 -- # local app=target 00:04:58.155 18:41:42 json_config -- json_config/common.sh@10 -- # shift 00:04:58.155 18:41:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.155 18:41:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.155 18:41:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.155 18:41:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:58.155 18:41:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.156 18:41:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2300824 00:04:58.156 18:41:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.156 Waiting for target to run... 00:04:58.156 18:41:42 json_config -- json_config/common.sh@25 -- # waitforlisten 2300824 /var/tmp/spdk_tgt.sock 00:04:58.156 18:41:42 json_config -- common/autotest_common.sh@829 -- # '[' -z 2300824 ']' 00:04:58.156 18:41:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:58.156 18:41:42 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.156 18:41:42 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.156 18:41:42 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.156 18:41:42 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.156 18:41:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.156 [2024-07-24 18:41:43.046143] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:04:58.156 [2024-07-24 18:41:43.046209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300824 ] 00:04:58.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.415 [2024-07-24 18:41:43.348872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.673 [2024-07-24 18:41:43.430251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.240 18:41:44 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.240 18:41:44 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:59.240 18:41:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.240 00:04:59.240 18:41:44 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:59.240 18:41:44 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:59.240 18:41:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.240 18:41:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.240 18:41:44 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:59.240 18:41:44 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:59.240 18:41:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.240 18:41:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.240 18:41:44 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:59.240 18:41:44 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:59.240 18:41:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.530 18:41:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.530 18:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:02.530 18:41:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@51 -- # sort 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:02.530 18:41:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.530 18:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:02.530 18:41:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.530 18:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:02.530 18:41:47 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.530 18:41:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.789 MallocForNvmf0 00:05:02.789 
18:41:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.789 18:41:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.047 MallocForNvmf1 00:05:03.048 18:41:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.048 18:41:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.306 [2024-07-24 18:41:48.202258] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.306 18:41:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.306 18:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.565 18:41:48 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.565 18:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.824 18:41:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.824 18:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.083 18:41:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.083 18:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.341 [2024-07-24 18:41:49.197440] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.341 18:41:49 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:04.341 18:41:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.341 18:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 18:41:49 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:04.341 18:41:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.341 18:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 18:41:49 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:04.341 18:41:49 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.341 18:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.620 MallocBdevForConfigChangeCheck 00:05:04.620 18:41:49 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:04.620 18:41:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.620 18:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.620 18:41:49 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:04.620 18:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.204 18:41:49 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:05.204 INFO: shutting down applications... 00:05:05.204 18:41:49 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:05.204 18:41:49 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:05.204 18:41:49 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:05.204 18:41:49 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:07.109 Calling clear_iscsi_subsystem 00:05:07.109 Calling clear_nvmf_subsystem 00:05:07.109 Calling clear_nbd_subsystem 00:05:07.109 Calling clear_ublk_subsystem 00:05:07.109 Calling clear_vhost_blk_subsystem 00:05:07.109 Calling clear_vhost_scsi_subsystem 00:05:07.109 Calling clear_bdev_subsystem 00:05:07.110 18:41:51 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:07.110 18:41:51 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:07.110 18:41:51 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:07.110 18:41:51 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.110 18:41:51 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:07.110 18:41:51 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:07.110 18:41:52 json_config -- json_config/json_config.sh@349 -- # break 00:05:07.110 18:41:52 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:07.110 18:41:52 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:07.110 18:41:52 json_config -- json_config/common.sh@31 -- # local app=target 00:05:07.110 18:41:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.110 18:41:52 json_config -- json_config/common.sh@35 -- # [[ -n 2300824 ]] 00:05:07.110 18:41:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2300824 00:05:07.110 18:41:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.110 18:41:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.110 18:41:52 json_config -- json_config/common.sh@41 -- # kill -0 2300824 00:05:07.110 18:41:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.678 18:41:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.678 18:41:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.678 18:41:52 json_config -- json_config/common.sh@41 -- # kill -0 2300824 00:05:07.678 18:41:52 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.678 18:41:52 json_config -- json_config/common.sh@43 -- # break 00:05:07.678 18:41:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.678 18:41:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.678 SPDK target shutdown done 00:05:07.678 18:41:52 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:07.678 INFO: relaunching applications... 00:05:07.678 18:41:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.678 18:41:52 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.678 18:41:52 json_config -- json_config/common.sh@10 -- # shift 00:05:07.678 18:41:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.678 18:41:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.678 18:41:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.678 18:41:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.678 18:41:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.678 18:41:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2302592 00:05:07.678 18:41:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.678 Waiting for target to run... 00:05:07.678 18:41:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.678 18:41:52 json_config -- json_config/common.sh@25 -- # waitforlisten 2302592 /var/tmp/spdk_tgt.sock 00:05:07.678 18:41:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 2302592 ']' 00:05:07.678 18:41:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.678 18:41:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.678 18:41:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.678 18:41:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.678 18:41:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.678 [2024-07-24 18:41:52.589761] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
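Note on the shutdown sequence just completed: json_config_test_shutdown_app sends SIGINT and then polls with kill -0 up to 30 times, sleeping 0.5 s between checks, before printing "SPDK target shutdown done". Condensed into a sketch (shutdown_app is an illustrative name):

    # Ask the target to exit, then allow up to 15 seconds of grace.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1   # still alive after the grace period
    }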
00:05:07.678 [2024-07-24 18:41:52.589832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302592 ] 00:05:07.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.245 [2024-07-24 18:41:53.053968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.245 [2024-07-24 18:41:53.153611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.532 [2024-07-24 18:41:56.201158] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.532 [2024-07-24 18:41:56.233508] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.532 18:41:56 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.532 18:41:56 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:11.532 18:41:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.532 00:05:11.532 18:41:56 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:11.532 18:41:56 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.532 INFO: Checking if target configuration is the same... 00:05:11.532 18:41:56 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.532 18:41:56 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:11.532 18:41:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.532 + '[' 2 -ne 2 ']' 00:05:11.532 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.532 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:11.532 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.532 +++ basename /dev/fd/62 00:05:11.532 ++ mktemp /tmp/62.XXX 00:05:11.532 + tmp_file_1=/tmp/62.AQA 00:05:11.532 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.532 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.532 + tmp_file_2=/tmp/spdk_tgt_config.json.4aC 00:05:11.532 + ret=0 00:05:11.532 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.791 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.791 + diff -u /tmp/62.AQA /tmp/spdk_tgt_config.json.4aC 00:05:11.791 + echo 'INFO: JSON config files are the same' 00:05:11.791 INFO: JSON config files are the same 00:05:11.791 + rm /tmp/62.AQA /tmp/spdk_tgt_config.json.4aC 00:05:11.791 + exit 0 00:05:11.791 18:41:56 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:11.791 18:41:56 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:11.791 INFO: changing configuration and checking if this can be detected... 
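Note on the comparison that just printed "INFO: JSON config files are the same": json_diff.sh normalizes both documents with config_filter.py -method sort before running diff -u, so key order cannot cause a false mismatch; the change-detection step that follows reuses the same machinery. The same idea with stock tools, assuming jq is available in place of the repo's config_filter.py (jq -S sorts object keys, not arrays):

    # Compare two JSON configs ignoring object key order.
    same_config() {
        diff -u <(jq -S . "$1") <(jq -S . "$2") >/dev/null
    }
    same_config saved.json live.json && echo 'INFO: JSON config files are the same'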
00:05:11.791 18:41:56 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.791 18:41:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.050 18:41:56 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:12.050 18:41:56 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.050 18:41:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.050 + '[' 2 -ne 2 ']' 00:05:12.050 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.050 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.050 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.050 +++ basename /dev/fd/62 00:05:12.050 ++ mktemp /tmp/62.XXX 00:05:12.050 + tmp_file_1=/tmp/62.6UO 00:05:12.050 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.050 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.050 + tmp_file_2=/tmp/spdk_tgt_config.json.HWL 00:05:12.050 + ret=0 00:05:12.050 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.619 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.619 + diff -u /tmp/62.6UO /tmp/spdk_tgt_config.json.HWL 00:05:12.619 + ret=1 00:05:12.619 + echo '=== Start of file: /tmp/62.6UO ===' 00:05:12.619 + cat /tmp/62.6UO 00:05:12.619 + echo '=== End of file: /tmp/62.6UO ===' 00:05:12.619 + echo '' 00:05:12.619 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HWL ===' 00:05:12.619 + cat /tmp/spdk_tgt_config.json.HWL 00:05:12.619 + echo '=== End of file: /tmp/spdk_tgt_config.json.HWL ===' 00:05:12.619 + echo '' 00:05:12.619 + rm /tmp/62.6UO /tmp/spdk_tgt_config.json.HWL 00:05:12.619 + exit 1 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:12.619 INFO: configuration change detected. 
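Note on the change detection above: the test deletes the sentinel bdev MallocBdevForConfigChangeCheck over RPC, saves the live configuration again, and expects the sorted diff to now return 1. Rebuilt as a sketch on top of the same_config helper from the previous note:

    # Mutate the live configuration, then re-run the sorted diff: it must differ.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
    same_config saved.json live.json || echo 'INFO: configuration change detected.'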
00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@321 -- # [[ -n 2302592 ]] 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.619 18:41:57 json_config -- json_config/json_config.sh@327 -- # killprocess 2302592 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@948 -- # '[' -z 2302592 ']' 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@952 -- # kill -0 2302592 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@953 -- # uname 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2302592 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2302592' 00:05:12.619 killing process with pid 2302592 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@967 -- # kill 2302592 00:05:12.619 18:41:57 json_config -- common/autotest_common.sh@972 -- # wait 2302592 00:05:14.524 18:41:59 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.524 18:41:59 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:14.524 18:41:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.524 18:41:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.524 18:41:59 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:14.524 18:41:59 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:14.524 INFO: Success 00:05:14.524 00:05:14.524 real 0m16.272s 
00:05:14.524 user 0m18.293s 00:05:14.524 sys 0m2.095s 00:05:14.524 18:41:59 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.524 18:41:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.524 ************************************ 00:05:14.524 END TEST json_config 00:05:14.524 ************************************ 00:05:14.524 18:41:59 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.524 18:41:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.524 18:41:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.524 18:41:59 -- common/autotest_common.sh@10 -- # set +x 00:05:14.524 ************************************ 00:05:14.524 START TEST json_config_extra_key 00:05:14.524 ************************************ 00:05:14.524 18:41:59 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.524 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:14.524 18:41:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.525 18:41:59 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.525 18:41:59 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.525 18:41:59 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.525 18:41:59 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.525 18:41:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.525 18:41:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.525 18:41:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:14.525 18:41:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.525 18:41:59 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.525 18:41:59 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:14.525 INFO: launching applications... 00:05:14.525 18:41:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2304018 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.525 Waiting for target to run... 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2304018 /var/tmp/spdk_tgt.sock 00:05:14.525 18:41:59 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2304018 ']' 00:05:14.525 18:41:59 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.525 18:41:59 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.525 18:41:59 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.525 18:41:59 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.525 18:41:59 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.525 18:41:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.525 [2024-07-24 18:41:59.369682] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
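Note on the launch above: json_config_extra_key drives the target entirely from a static file — --json .../test/json_config/extra_key.json loads the subsystem configuration at startup, so no live RPC configuration (and no --wait-for-rpc) is needed. A minimal launch sketch, reusing the waitforsocket helper sketched earlier:

    # Start the target from a canned JSON config and wait for its RPC socket.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!
    waitforsocket "$app_pid" /var/tmp/spdk_tgt.sock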
00:05:14.525 [2024-07-24 18:41:59.369739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304018 ] 00:05:14.525 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.094 [2024-07-24 18:41:59.823304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.094 [2024-07-24 18:41:59.930076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.353 18:42:00 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.353 18:42:00 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.353 00:05:15.353 18:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.353 INFO: shutting down applications... 00:05:15.353 18:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2304018 ]] 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2304018 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2304018 00:05:15.353 18:42:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2304018 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.922 18:42:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.922 SPDK target shutdown done 00:05:15.922 18:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.922 Success 00:05:15.922 00:05:15.922 real 0m1.600s 00:05:15.922 user 0m1.352s 00:05:15.922 sys 0m0.574s 00:05:15.922 18:42:00 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.922 18:42:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 ************************************ 00:05:15.922 END TEST json_config_extra_key 00:05:15.922 ************************************ 00:05:15.922 18:42:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.922 18:42:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.922 18:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.922 18:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 
************************************ 00:05:15.922 START TEST alias_rpc 00:05:15.922 ************************************ 00:05:15.922 18:42:00 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.182 * Looking for test storage... 00:05:16.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:16.182 18:42:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.182 18:42:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2304384 00:05:16.182 18:42:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2304384 00:05:16.182 18:42:00 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2304384 ']' 00:05:16.182 18:42:00 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.182 18:42:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.182 18:42:00 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.182 18:42:00 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.182 18:42:00 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.182 18:42:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.182 [2024-07-24 18:42:01.028634] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:16.182 [2024-07-24 18:42:01.028694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304384 ] 00:05:16.182 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.182 [2024-07-24 18:42:01.108734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.441 [2024-07-24 18:42:01.202073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.379 18:42:02 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.379 18:42:02 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.379 18:42:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:17.948 18:42:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2304384 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2304384 ']' 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2304384 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2304384 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2304384' 00:05:17.948 killing process with pid 2304384 00:05:17.948 18:42:02 alias_rpc -- common/autotest_common.sh@967 -- # kill 2304384 00:05:17.948 18:42:02 
alias_rpc -- common/autotest_common.sh@972 -- # wait 2304384 00:05:18.207 00:05:18.207 real 0m2.231s 00:05:18.207 user 0m2.994s 00:05:18.207 sys 0m0.486s 00:05:18.207 18:42:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.207 18:42:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 ************************************ 00:05:18.207 END TEST alias_rpc 00:05:18.207 ************************************ 00:05:18.207 18:42:03 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:18.207 18:42:03 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.207 18:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.207 18:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.207 18:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 ************************************ 00:05:18.207 START TEST spdkcli_tcp 00:05:18.207 ************************************ 00:05:18.207 18:42:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.466 * Looking for test storage... 00:05:18.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2305025 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2305025 00:05:18.466 18:42:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2305025 ']' 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.466 18:42:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.466 [2024-07-24 18:42:03.345139] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:05:18.466 [2024-07-24 18:42:03.345198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305025 ] 00:05:18.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.466 [2024-07-24 18:42:03.427589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.725 [2024-07-24 18:42:03.517493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.725 [2024-07-24 18:42:03.517498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.293 18:42:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.293 18:42:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:19.293 18:42:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2305048 00:05:19.293 18:42:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.293 18:42:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.563 [ 00:05:19.563 "bdev_malloc_delete", 00:05:19.563 "bdev_malloc_create", 00:05:19.563 "bdev_null_resize", 00:05:19.563 "bdev_null_delete", 00:05:19.563 "bdev_null_create", 00:05:19.563 "bdev_nvme_cuse_unregister", 00:05:19.563 "bdev_nvme_cuse_register", 00:05:19.563 "bdev_opal_new_user", 00:05:19.563 "bdev_opal_set_lock_state", 00:05:19.563 "bdev_opal_delete", 00:05:19.563 "bdev_opal_get_info", 00:05:19.563 "bdev_opal_create", 00:05:19.563 "bdev_nvme_opal_revert", 00:05:19.563 "bdev_nvme_opal_init", 00:05:19.563 "bdev_nvme_send_cmd", 00:05:19.563 "bdev_nvme_get_path_iostat", 00:05:19.563 "bdev_nvme_get_mdns_discovery_info", 00:05:19.563 "bdev_nvme_stop_mdns_discovery", 00:05:19.563 "bdev_nvme_start_mdns_discovery", 00:05:19.563 "bdev_nvme_set_multipath_policy", 00:05:19.563 "bdev_nvme_set_preferred_path", 00:05:19.563 "bdev_nvme_get_io_paths", 00:05:19.563 "bdev_nvme_remove_error_injection", 00:05:19.563 "bdev_nvme_add_error_injection", 00:05:19.563 "bdev_nvme_get_discovery_info", 00:05:19.563 "bdev_nvme_stop_discovery", 00:05:19.563 "bdev_nvme_start_discovery", 00:05:19.563 "bdev_nvme_get_controller_health_info", 00:05:19.563 "bdev_nvme_disable_controller", 00:05:19.563 "bdev_nvme_enable_controller", 00:05:19.563 "bdev_nvme_reset_controller", 00:05:19.563 "bdev_nvme_get_transport_statistics", 00:05:19.563 "bdev_nvme_apply_firmware", 00:05:19.563 "bdev_nvme_detach_controller", 00:05:19.563 "bdev_nvme_get_controllers", 00:05:19.563 "bdev_nvme_attach_controller", 00:05:19.563 "bdev_nvme_set_hotplug", 00:05:19.563 "bdev_nvme_set_options", 00:05:19.563 "bdev_passthru_delete", 00:05:19.563 "bdev_passthru_create", 00:05:19.563 "bdev_lvol_set_parent_bdev", 00:05:19.563 "bdev_lvol_set_parent", 00:05:19.563 "bdev_lvol_check_shallow_copy", 00:05:19.563 "bdev_lvol_start_shallow_copy", 00:05:19.563 "bdev_lvol_grow_lvstore", 00:05:19.563 "bdev_lvol_get_lvols", 00:05:19.563 "bdev_lvol_get_lvstores", 00:05:19.563 "bdev_lvol_delete", 00:05:19.563 "bdev_lvol_set_read_only", 00:05:19.563 "bdev_lvol_resize", 00:05:19.563 "bdev_lvol_decouple_parent", 00:05:19.563 "bdev_lvol_inflate", 00:05:19.563 "bdev_lvol_rename", 00:05:19.563 "bdev_lvol_clone_bdev", 00:05:19.563 "bdev_lvol_clone", 00:05:19.563 "bdev_lvol_snapshot", 00:05:19.563 "bdev_lvol_create", 00:05:19.563 "bdev_lvol_delete_lvstore", 00:05:19.563 
"bdev_lvol_rename_lvstore", 00:05:19.563 "bdev_lvol_create_lvstore", 00:05:19.563 "bdev_raid_set_options", 00:05:19.563 "bdev_raid_remove_base_bdev", 00:05:19.563 "bdev_raid_add_base_bdev", 00:05:19.563 "bdev_raid_delete", 00:05:19.563 "bdev_raid_create", 00:05:19.563 "bdev_raid_get_bdevs", 00:05:19.563 "bdev_error_inject_error", 00:05:19.563 "bdev_error_delete", 00:05:19.563 "bdev_error_create", 00:05:19.563 "bdev_split_delete", 00:05:19.563 "bdev_split_create", 00:05:19.563 "bdev_delay_delete", 00:05:19.563 "bdev_delay_create", 00:05:19.563 "bdev_delay_update_latency", 00:05:19.563 "bdev_zone_block_delete", 00:05:19.563 "bdev_zone_block_create", 00:05:19.563 "blobfs_create", 00:05:19.563 "blobfs_detect", 00:05:19.563 "blobfs_set_cache_size", 00:05:19.563 "bdev_aio_delete", 00:05:19.563 "bdev_aio_rescan", 00:05:19.563 "bdev_aio_create", 00:05:19.563 "bdev_ftl_set_property", 00:05:19.563 "bdev_ftl_get_properties", 00:05:19.563 "bdev_ftl_get_stats", 00:05:19.563 "bdev_ftl_unmap", 00:05:19.563 "bdev_ftl_unload", 00:05:19.563 "bdev_ftl_delete", 00:05:19.563 "bdev_ftl_load", 00:05:19.563 "bdev_ftl_create", 00:05:19.563 "bdev_virtio_attach_controller", 00:05:19.563 "bdev_virtio_scsi_get_devices", 00:05:19.563 "bdev_virtio_detach_controller", 00:05:19.563 "bdev_virtio_blk_set_hotplug", 00:05:19.563 "bdev_iscsi_delete", 00:05:19.563 "bdev_iscsi_create", 00:05:19.563 "bdev_iscsi_set_options", 00:05:19.563 "accel_error_inject_error", 00:05:19.563 "ioat_scan_accel_module", 00:05:19.563 "dsa_scan_accel_module", 00:05:19.563 "iaa_scan_accel_module", 00:05:19.563 "vfu_virtio_create_scsi_endpoint", 00:05:19.563 "vfu_virtio_scsi_remove_target", 00:05:19.563 "vfu_virtio_scsi_add_target", 00:05:19.563 "vfu_virtio_create_blk_endpoint", 00:05:19.563 "vfu_virtio_delete_endpoint", 00:05:19.563 "keyring_file_remove_key", 00:05:19.563 "keyring_file_add_key", 00:05:19.563 "keyring_linux_set_options", 00:05:19.563 "iscsi_get_histogram", 00:05:19.563 "iscsi_enable_histogram", 00:05:19.563 "iscsi_set_options", 00:05:19.563 "iscsi_get_auth_groups", 00:05:19.563 "iscsi_auth_group_remove_secret", 00:05:19.563 "iscsi_auth_group_add_secret", 00:05:19.563 "iscsi_delete_auth_group", 00:05:19.563 "iscsi_create_auth_group", 00:05:19.563 "iscsi_set_discovery_auth", 00:05:19.563 "iscsi_get_options", 00:05:19.563 "iscsi_target_node_request_logout", 00:05:19.563 "iscsi_target_node_set_redirect", 00:05:19.563 "iscsi_target_node_set_auth", 00:05:19.563 "iscsi_target_node_add_lun", 00:05:19.563 "iscsi_get_stats", 00:05:19.563 "iscsi_get_connections", 00:05:19.563 "iscsi_portal_group_set_auth", 00:05:19.563 "iscsi_start_portal_group", 00:05:19.563 "iscsi_delete_portal_group", 00:05:19.563 "iscsi_create_portal_group", 00:05:19.563 "iscsi_get_portal_groups", 00:05:19.563 "iscsi_delete_target_node", 00:05:19.563 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.563 "iscsi_target_node_add_pg_ig_maps", 00:05:19.563 "iscsi_create_target_node", 00:05:19.563 "iscsi_get_target_nodes", 00:05:19.563 "iscsi_delete_initiator_group", 00:05:19.563 "iscsi_initiator_group_remove_initiators", 00:05:19.563 "iscsi_initiator_group_add_initiators", 00:05:19.563 "iscsi_create_initiator_group", 00:05:19.563 "iscsi_get_initiator_groups", 00:05:19.563 "nvmf_set_crdt", 00:05:19.563 "nvmf_set_config", 00:05:19.563 "nvmf_set_max_subsystems", 00:05:19.563 "nvmf_stop_mdns_prr", 00:05:19.563 "nvmf_publish_mdns_prr", 00:05:19.563 "nvmf_subsystem_get_listeners", 00:05:19.563 "nvmf_subsystem_get_qpairs", 00:05:19.563 "nvmf_subsystem_get_controllers", 00:05:19.563 
"nvmf_get_stats", 00:05:19.563 "nvmf_get_transports", 00:05:19.563 "nvmf_create_transport", 00:05:19.563 "nvmf_get_targets", 00:05:19.563 "nvmf_delete_target", 00:05:19.563 "nvmf_create_target", 00:05:19.563 "nvmf_subsystem_allow_any_host", 00:05:19.563 "nvmf_subsystem_remove_host", 00:05:19.563 "nvmf_subsystem_add_host", 00:05:19.563 "nvmf_ns_remove_host", 00:05:19.563 "nvmf_ns_add_host", 00:05:19.563 "nvmf_subsystem_remove_ns", 00:05:19.563 "nvmf_subsystem_add_ns", 00:05:19.563 "nvmf_subsystem_listener_set_ana_state", 00:05:19.563 "nvmf_discovery_get_referrals", 00:05:19.563 "nvmf_discovery_remove_referral", 00:05:19.563 "nvmf_discovery_add_referral", 00:05:19.563 "nvmf_subsystem_remove_listener", 00:05:19.563 "nvmf_subsystem_add_listener", 00:05:19.563 "nvmf_delete_subsystem", 00:05:19.563 "nvmf_create_subsystem", 00:05:19.563 "nvmf_get_subsystems", 00:05:19.563 "env_dpdk_get_mem_stats", 00:05:19.563 "nbd_get_disks", 00:05:19.563 "nbd_stop_disk", 00:05:19.563 "nbd_start_disk", 00:05:19.563 "ublk_recover_disk", 00:05:19.563 "ublk_get_disks", 00:05:19.563 "ublk_stop_disk", 00:05:19.563 "ublk_start_disk", 00:05:19.563 "ublk_destroy_target", 00:05:19.563 "ublk_create_target", 00:05:19.563 "virtio_blk_create_transport", 00:05:19.563 "virtio_blk_get_transports", 00:05:19.563 "vhost_controller_set_coalescing", 00:05:19.563 "vhost_get_controllers", 00:05:19.563 "vhost_delete_controller", 00:05:19.563 "vhost_create_blk_controller", 00:05:19.563 "vhost_scsi_controller_remove_target", 00:05:19.563 "vhost_scsi_controller_add_target", 00:05:19.563 "vhost_start_scsi_controller", 00:05:19.563 "vhost_create_scsi_controller", 00:05:19.563 "thread_set_cpumask", 00:05:19.563 "framework_get_governor", 00:05:19.563 "framework_get_scheduler", 00:05:19.563 "framework_set_scheduler", 00:05:19.563 "framework_get_reactors", 00:05:19.563 "thread_get_io_channels", 00:05:19.563 "thread_get_pollers", 00:05:19.563 "thread_get_stats", 00:05:19.563 "framework_monitor_context_switch", 00:05:19.563 "spdk_kill_instance", 00:05:19.563 "log_enable_timestamps", 00:05:19.563 "log_get_flags", 00:05:19.563 "log_clear_flag", 00:05:19.563 "log_set_flag", 00:05:19.563 "log_get_level", 00:05:19.563 "log_set_level", 00:05:19.563 "log_get_print_level", 00:05:19.563 "log_set_print_level", 00:05:19.563 "framework_enable_cpumask_locks", 00:05:19.563 "framework_disable_cpumask_locks", 00:05:19.563 "framework_wait_init", 00:05:19.563 "framework_start_init", 00:05:19.563 "scsi_get_devices", 00:05:19.563 "bdev_get_histogram", 00:05:19.563 "bdev_enable_histogram", 00:05:19.563 "bdev_set_qos_limit", 00:05:19.564 "bdev_set_qd_sampling_period", 00:05:19.564 "bdev_get_bdevs", 00:05:19.564 "bdev_reset_iostat", 00:05:19.564 "bdev_get_iostat", 00:05:19.564 "bdev_examine", 00:05:19.564 "bdev_wait_for_examine", 00:05:19.564 "bdev_set_options", 00:05:19.564 "notify_get_notifications", 00:05:19.564 "notify_get_types", 00:05:19.564 "accel_get_stats", 00:05:19.564 "accel_set_options", 00:05:19.564 "accel_set_driver", 00:05:19.564 "accel_crypto_key_destroy", 00:05:19.564 "accel_crypto_keys_get", 00:05:19.564 "accel_crypto_key_create", 00:05:19.564 "accel_assign_opc", 00:05:19.564 "accel_get_module_info", 00:05:19.564 "accel_get_opc_assignments", 00:05:19.564 "vmd_rescan", 00:05:19.564 "vmd_remove_device", 00:05:19.564 "vmd_enable", 00:05:19.564 "sock_get_default_impl", 00:05:19.564 "sock_set_default_impl", 00:05:19.564 "sock_impl_set_options", 00:05:19.564 "sock_impl_get_options", 00:05:19.564 "iobuf_get_stats", 00:05:19.564 "iobuf_set_options", 
00:05:19.564 "keyring_get_keys", 00:05:19.564 "framework_get_pci_devices", 00:05:19.564 "framework_get_config", 00:05:19.564 "framework_get_subsystems", 00:05:19.564 "vfu_tgt_set_base_path", 00:05:19.564 "trace_get_info", 00:05:19.564 "trace_get_tpoint_group_mask", 00:05:19.564 "trace_disable_tpoint_group", 00:05:19.564 "trace_enable_tpoint_group", 00:05:19.564 "trace_clear_tpoint_mask", 00:05:19.564 "trace_set_tpoint_mask", 00:05:19.564 "spdk_get_version", 00:05:19.564 "rpc_get_methods" 00:05:19.564 ] 00:05:19.564 18:42:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:19.564 18:42:04 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.564 18:42:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.564 18:42:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:19.564 18:42:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2305025 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2305025 ']' 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2305025 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2305025 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2305025' 00:05:19.824 killing process with pid 2305025 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2305025 00:05:19.824 18:42:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2305025 00:05:20.084 00:05:20.084 real 0m1.771s 00:05:20.084 user 0m3.441s 00:05:20.084 sys 0m0.473s 00:05:20.084 18:42:04 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.084 18:42:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.084 ************************************ 00:05:20.084 END TEST spdkcli_tcp 00:05:20.084 ************************************ 00:05:20.084 18:42:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.084 18:42:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.084 18:42:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.084 18:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:20.084 ************************************ 00:05:20.084 START TEST dpdk_mem_utility 00:05:20.084 ************************************ 00:05:20.084 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.343 * Looking for test storage... 
00:05:20.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:20.343 18:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.343 18:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2305363 00:05:20.344 18:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2305363 00:05:20.344 18:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.344 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2305363 ']' 00:05:20.344 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.344 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.344 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.344 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.344 18:42:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.344 [2024-07-24 18:42:05.178424] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:20.344 [2024-07-24 18:42:05.178482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305363 ] 00:05:20.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.344 [2024-07-24 18:42:05.261645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.344 [2024-07-24 18:42:05.351514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.281 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.281 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:21.281 18:42:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:21.281 18:42:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:21.282 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.282 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.282 { 00:05:21.282 "filename": "/tmp/spdk_mem_dump.txt" 00:05:21.282 } 00:05:21.282 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.282 18:42:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.282 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:21.282 1 heaps totaling size 814.000000 MiB 00:05:21.282 size: 814.000000 MiB heap id: 0 00:05:21.282 end heaps---------- 00:05:21.282 8 mempools totaling size 598.116089 MiB 00:05:21.282 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:21.282 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:21.282 size: 84.521057 MiB name: bdev_io_2305363 00:05:21.282 size: 51.011292 MiB name: evtpool_2305363 00:05:21.282 
size: 50.003479 MiB name: msgpool_2305363 00:05:21.282 size: 21.763794 MiB name: PDU_Pool 00:05:21.282 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:21.282 size: 0.026123 MiB name: Session_Pool 00:05:21.282 end mempools------- 00:05:21.282 6 memzones totaling size 4.142822 MiB 00:05:21.282 size: 1.000366 MiB name: RG_ring_0_2305363 00:05:21.282 size: 1.000366 MiB name: RG_ring_1_2305363 00:05:21.282 size: 1.000366 MiB name: RG_ring_4_2305363 00:05:21.282 size: 1.000366 MiB name: RG_ring_5_2305363 00:05:21.282 size: 0.125366 MiB name: RG_ring_2_2305363 00:05:21.282 size: 0.015991 MiB name: RG_ring_3_2305363 00:05:21.282 end memzones------- 00:05:21.282 18:42:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:21.282 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:21.282 list of free elements. size: 12.519348 MiB 00:05:21.282 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:21.282 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:21.282 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:21.282 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:21.282 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:21.282 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:21.282 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:21.282 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:21.282 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:21.282 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:21.282 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:21.282 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:21.282 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:21.282 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:21.282 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:21.282 list of standard malloc elements. 
size: 199.218079 MiB 00:05:21.282 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:21.282 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:21.282 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:21.282 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:21.282 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:21.282 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:21.282 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:21.282 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:21.282 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:21.282 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:21.282 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:21.282 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:21.282 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:21.282 list of memzone associated elements. 
size: 602.262573 MiB 00:05:21.282 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:21.282 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:21.282 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:21.282 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:21.282 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:21.282 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2305363_0 00:05:21.282 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:21.282 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2305363_0 00:05:21.282 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:21.282 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2305363_0 00:05:21.282 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:21.282 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:21.282 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:21.282 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:21.282 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:21.282 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2305363 00:05:21.282 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:21.282 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2305363 00:05:21.282 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:21.282 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2305363 00:05:21.282 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:21.282 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:21.282 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:21.282 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:21.282 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:21.282 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:21.282 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:21.282 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:21.282 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:21.282 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2305363 00:05:21.282 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:21.282 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2305363 00:05:21.282 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:21.282 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2305363 00:05:21.282 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:21.282 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2305363 00:05:21.282 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:21.282 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2305363 00:05:21.282 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:21.282 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:21.282 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:21.282 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:21.282 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:21.282 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:21.282 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:21.282 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2305363 00:05:21.282 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:21.282 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:21.282 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:21.282 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:21.282 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:21.282 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2305363 00:05:21.282 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:21.282 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:21.282 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:21.282 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2305363 00:05:21.282 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:21.282 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2305363 00:05:21.282 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:21.282 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:21.283 18:42:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:21.283 18:42:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2305363 00:05:21.283 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2305363 ']' 00:05:21.283 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2305363 00:05:21.283 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:21.283 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.283 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2305363 00:05:21.542 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.542 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.542 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2305363' 00:05:21.542 killing process with pid 2305363 00:05:21.542 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2305363 00:05:21.542 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2305363 00:05:21.802 00:05:21.802 real 0m1.607s 00:05:21.802 user 0m1.794s 00:05:21.802 sys 0m0.458s 00:05:21.802 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.802 18:42:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.802 ************************************ 00:05:21.802 END TEST dpdk_mem_utility 00:05:21.802 ************************************ 00:05:21.802 18:42:06 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.802 18:42:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.802 18:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.802 18:42:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.802 ************************************ 00:05:21.802 START TEST event 00:05:21.802 ************************************ 00:05:21.802 18:42:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.802 * Looking for test storage... 
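The heap/mempool/memzone report above is produced in two steps: the env_dpdk_get_mem_stats RPC has the target write its allocator state to /tmp/spdk_mem_dump.txt (the { "filename": ... } reply above), and dpdk_mem_info.py renders that file. Judging by the output, the second invocation's -m 0 switches to the per-element listing for heap id 0. In outline:

  scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0, per the trace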
00:05:21.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:21.802 18:42:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:21.802 18:42:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.802 18:42:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.802 18:42:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:21.802 18:42:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.802 18:42:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 ************************************ 00:05:22.061 START TEST event_perf 00:05:22.061 ************************************ 00:05:22.061 18:42:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.061 Running I/O for 1 seconds...[2024-07-24 18:42:06.847721] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:22.061 [2024-07-24 18:42:06.847788] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305693 ] 00:05:22.061 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.061 [2024-07-24 18:42:06.930731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.061 [2024-07-24 18:42:07.024297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.061 [2024-07-24 18:42:07.024408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.061 [2024-07-24 18:42:07.024520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.061 [2024-07-24 18:42:07.024521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.439 Running I/O for 1 seconds... 00:05:23.439 lcore 0: 102795 00:05:23.439 lcore 1: 102798 00:05:23.439 lcore 2: 102800 00:05:23.439 lcore 3: 102798 00:05:23.439 done. 00:05:23.439 00:05:23.439 real 0m1.278s 00:05:23.439 user 0m4.170s 00:05:23.439 sys 0m0.097s 00:05:23.439 18:42:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.439 18:42:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 ************************************ 00:05:23.439 END TEST event_perf 00:05:23.439 ************************************ 00:05:23.439 18:42:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:23.439 18:42:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:23.439 18:42:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.439 18:42:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 ************************************ 00:05:23.439 START TEST event_reactor 00:05:23.439 ************************************ 00:05:23.439 18:42:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:23.439 [2024-07-24 18:42:08.186911] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:05:23.439 [2024-07-24 18:42:08.186981] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306107 ] 00:05:23.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.439 [2024-07-24 18:42:08.269435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.439 [2024-07-24 18:42:08.355508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.816 test_start 00:05:24.816 oneshot 00:05:24.816 tick 100 00:05:24.816 tick 100 00:05:24.816 tick 250 00:05:24.816 tick 100 00:05:24.816 tick 100 00:05:24.816 tick 250 00:05:24.816 tick 100 00:05:24.816 tick 500 00:05:24.816 tick 100 00:05:24.816 tick 100 00:05:24.816 tick 250 00:05:24.816 tick 100 00:05:24.816 tick 100 00:05:24.816 test_end 00:05:24.816 00:05:24.816 real 0m1.269s 00:05:24.817 user 0m1.161s 00:05:24.817 sys 0m0.101s 00:05:24.817 18:42:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.817 18:42:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:24.817 ************************************ 00:05:24.817 END TEST event_reactor 00:05:24.817 ************************************ 00:05:24.817 18:42:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.817 18:42:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:24.817 18:42:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.817 18:42:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.817 ************************************ 00:05:24.817 START TEST event_reactor_perf 00:05:24.817 ************************************ 00:05:24.817 18:42:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.817 [2024-07-24 18:42:09.518799] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:05:24.817 [2024-07-24 18:42:09.518850] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306618 ] 00:05:24.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.817 [2024-07-24 18:42:09.599046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.817 [2024-07-24 18:42:09.685994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.196 test_start 00:05:26.196 test_end 00:05:26.196 Performance: 312746 events per second 00:05:26.196 00:05:26.196 real 0m1.264s 00:05:26.196 user 0m1.170s 00:05:26.196 sys 0m0.088s 00:05:26.196 18:42:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.196 18:42:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.196 ************************************ 00:05:26.196 END TEST event_reactor_perf 00:05:26.196 ************************************ 00:05:26.196 18:42:10 event -- event/event.sh@49 -- # uname -s 00:05:26.196 18:42:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:26.196 18:42:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:26.196 18:42:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.196 18:42:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.196 18:42:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.196 ************************************ 00:05:26.196 START TEST event_scheduler 00:05:26.196 ************************************ 00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:26.197 * Looking for test storage... 00:05:26.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:26.197 18:42:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:26.197 18:42:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2306955 00:05:26.197 18:42:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.197 18:42:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:26.197 18:42:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2306955 00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2306955 ']' 00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
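The scheduler app was just launched with --wait-for-rpc, which holds the framework before subsystem initialization so the test can choose a scheduler first; the next RPCs in the trace (framework_set_scheduler, then framework_start_init, both present in the method list printed earlier) complete startup. The handshake, in outline with the binary path shortened:

  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &  # flags as traced; -p 0x2 shows up as --main-lcore=2 in the EAL parameters below
  scripts/rpc.py framework_set_scheduler dynamic   # must land before init
  scripts/rpc.py framework_start_init              # releases the app to finish starting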
00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.197 18:42:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.197 [2024-07-24 18:42:10.983796] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:26.197 [2024-07-24 18:42:10.983861] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306955 ] 00:05:26.197 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.197 [2024-07-24 18:42:11.100547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.456 [2024-07-24 18:42:11.254359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.456 [2024-07-24 18:42:11.254455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.456 [2024-07-24 18:42:11.254568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.456 [2024-07-24 18:42:11.254578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:27.025 18:42:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.025 [2024-07-24 18:42:11.950004] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:27.025 [2024-07-24 18:42:11.950049] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:27.025 [2024-07-24 18:42:11.950075] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:27.025 [2024-07-24 18:42:11.950092] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:27.025 [2024-07-24 18:42:11.950109] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.025 18:42:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.025 18:42:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 [2024-07-24 18:42:12.056822] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
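With the dynamic scheduler active (the trace shows the DPDK governor failing to initialize and the run proceeding without it, with scheduler options set to load limit 20, core limit 80, core busy 95), scheduler_create_thread drives the app through its test plugin, as traced below. The RPC pattern it repeats, sketched from the calls visible in the trace (thread ids 11 and 12 are simply the values this run got back):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # name, cpumask, % active
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # retune thread 11 to 50% busy
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                              # drop a thread, let cores rebalance

rpc_cmd here is the autotest wrapper around rpc.py; --plugin scheduler_plugin is what exposes these test-only methods.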
00:05:27.285 18:42:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:27.285 18:42:12 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.285 18:42:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 ************************************ 00:05:27.285 START TEST scheduler_create_thread 00:05:27.285 ************************************ 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 2 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 3 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 4 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 5 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 6 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 7 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 8 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 9 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 10 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.285 18:42:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.257 18:42:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.257 18:42:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.257 18:42:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.257 18:42:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.636 18:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.636 18:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:29.636 18:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:29.636 18:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.636 18:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.574 18:42:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.574 00:05:30.574 real 0m3.386s 00:05:30.574 user 0m0.023s 00:05:30.574 sys 0m0.006s 00:05:30.574 18:42:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.574 18:42:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.574 ************************************ 00:05:30.574 END TEST scheduler_create_thread 00:05:30.574 ************************************ 00:05:30.574 18:42:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:30.574 18:42:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2306955 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2306955 ']' 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2306955 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2306955 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2306955' 00:05:30.574 killing process with pid 2306955 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2306955 00:05:30.574 18:42:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2306955 00:05:31.143 [2024-07-24 18:42:15.859792] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
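The killprocess step that closes this suite (and alias_rpc, spdkcli_tcp, and dpdk_mem_utility before it) is the same guarded teardown each time: confirm the PID is alive with kill -0, resolve its command name, then kill and wait. A simplified sketch of what the trace keeps repeating; the real helper in autotest_common.sh additionally special-cases sudo-wrapped PIDs (that is what the '[' reactor_2 = sudo ']' test above is for):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1           # already gone: nothing to kill
      ps --no-headers -o comm= "$pid"      # resolves to reactor_0 / reactor_2 in these traces
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                          # reap it so the suite's timing and exit status stay clean
  }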
00:05:31.403 00:05:31.403 real 0m5.361s 00:05:31.403 user 0m10.913s 00:05:31.403 sys 0m0.455s 00:05:31.403 18:42:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.403 18:42:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.403 ************************************ 00:05:31.403 END TEST event_scheduler 00:05:31.403 ************************************ 00:05:31.403 18:42:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.403 18:42:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.403 18:42:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.403 18:42:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.403 18:42:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.403 ************************************ 00:05:31.403 START TEST app_repeat 00:05:31.403 ************************************ 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2308051 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2308051' 00:05:31.403 Process app_repeat pid: 2308051 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.403 spdk_app_start Round 0 00:05:31.403 18:42:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2308051 /var/tmp/spdk-nbd.sock 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2308051 ']' 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.403 18:42:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.403 [2024-07-24 18:42:16.321211] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:05:31.403 [2024-07-24 18:42:16.321270] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308051 ] 00:05:31.403 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.403 [2024-07-24 18:42:16.403934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.662 [2024-07-24 18:42:16.498401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.662 [2024-07-24 18:42:16.498406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.662 18:42:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.662 18:42:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:31.662 18:42:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.921 Malloc0 00:05:31.921 18:42:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.181 Malloc1 00:05:32.181 18:42:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.181 18:42:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.440 /dev/nbd0 00:05:32.440 18:42:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.440 18:42:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.440 18:42:17 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.440 1+0 records in 00:05:32.440 1+0 records out 00:05:32.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248079 s, 16.5 MB/s 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.440 18:42:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.440 18:42:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.440 18:42:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.440 18:42:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.708 /dev/nbd1 00:05:32.708 18:42:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.708 18:42:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.708 18:42:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:32.708 18:42:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.708 18:42:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.708 18:42:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.709 1+0 records in 00:05:32.709 1+0 records out 00:05:32.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024828 s, 16.5 MB/s 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.709 18:42:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.709 18:42:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.709 18:42:17 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.709 18:42:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.709 18:42:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.709 18:42:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.968 { 00:05:32.968 "nbd_device": "/dev/nbd0", 00:05:32.968 "bdev_name": "Malloc0" 00:05:32.968 }, 00:05:32.968 { 00:05:32.968 "nbd_device": "/dev/nbd1", 00:05:32.968 "bdev_name": "Malloc1" 00:05:32.968 } 00:05:32.968 ]' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.968 { 00:05:32.968 "nbd_device": "/dev/nbd0", 00:05:32.968 "bdev_name": "Malloc0" 00:05:32.968 }, 00:05:32.968 { 00:05:32.968 "nbd_device": "/dev/nbd1", 00:05:32.968 "bdev_name": "Malloc1" 00:05:32.968 } 00:05:32.968 ]' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.968 /dev/nbd1' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.968 /dev/nbd1' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.968 256+0 records in 00:05:32.968 256+0 records out 00:05:32.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0097949 s, 107 MB/s 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.968 256+0 records in 00:05:32.968 256+0 records out 00:05:32.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198243 s, 52.9 MB/s 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.968 18:42:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.227 256+0 records in 00:05:33.227 256+0 records out 00:05:33.227 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0209643 s, 50.0 MB/s 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.227 18:42:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.227 18:42:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.227 18:42:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.227 18:42:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.227 18:42:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.228 18:42:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.228 18:42:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.228 18:42:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.487 18:42:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.746 18:42:18 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.746 18:42:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.005 18:42:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.006 18:42:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.264 18:42:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.524 [2024-07-24 18:42:19.339681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.524 [2024-07-24 18:42:19.423094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.524 [2024-07-24 18:42:19.423099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.524 [2024-07-24 18:42:19.467484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.524 [2024-07-24 18:42:19.467536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.814 18:42:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.814 18:42:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.814 spdk_app_start Round 1 00:05:37.814 18:42:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2308051 /var/tmp/spdk-nbd.sock 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2308051 ']' 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
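Before Round 1 repeats the cycle, the waitfornbd helper whose xtrace is interleaved through Round 0 above is worth reassembling. A sketch under two assumptions the log cannot confirm (the sleep between probes and the failure return, since only the first-try success path is logged), with the scratch path shortened to /tmp/nbdtest:

    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the kernel publishes the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Then confirm it actually services I/O: read one 4 KiB block, O_DIRECT.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }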
00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.814 18:42:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:37.814 18:42:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.814 Malloc0 00:05:37.814 18:42:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.073 Malloc1 00:05:38.073 18:42:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.073 18:42:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.332 /dev/nbd0 00:05:38.332 18:42:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.332 18:42:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.332 18:42:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:38.332 18:42:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.332 18:42:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.332 18:42:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.332 18:42:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:38.332 18:42:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:38.333 1+0 records in 00:05:38.333 1+0 records out 00:05:38.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235465 s, 17.4 MB/s 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.333 18:42:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.333 18:42:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.333 18:42:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.333 18:42:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.591 /dev/nbd1 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.591 1+0 records in 00:05:38.591 1+0 records out 00:05:38.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241503 s, 17.0 MB/s 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.591 18:42:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.591 18:42:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:38.851 { 00:05:38.851 "nbd_device": "/dev/nbd0", 00:05:38.851 "bdev_name": "Malloc0" 00:05:38.851 }, 00:05:38.851 { 00:05:38.851 "nbd_device": "/dev/nbd1", 00:05:38.851 "bdev_name": "Malloc1" 00:05:38.851 } 00:05:38.851 ]' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.851 { 00:05:38.851 "nbd_device": "/dev/nbd0", 00:05:38.851 "bdev_name": "Malloc0" 00:05:38.851 }, 00:05:38.851 { 00:05:38.851 "nbd_device": "/dev/nbd1", 00:05:38.851 "bdev_name": "Malloc1" 00:05:38.851 } 00:05:38.851 ]' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.851 /dev/nbd1' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.851 /dev/nbd1' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.851 256+0 records in 00:05:38.851 256+0 records out 00:05:38.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00981299 s, 107 MB/s 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.851 256+0 records in 00:05:38.851 256+0 records out 00:05:38.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200462 s, 52.3 MB/s 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.851 18:42:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.111 256+0 records in 00:05:39.111 256+0 records out 00:05:39.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208103 s, 50.4 MB/s 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.111 18:42:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.370 18:42:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.629 18:42:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.888 18:42:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.888 18:42:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.149 18:42:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.407 [2024-07-24 18:42:25.158541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.407 [2024-07-24 18:42:25.239524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.407 [2024-07-24 18:42:25.239529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.407 [2024-07-24 18:42:25.284898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.407 [2024-07-24 18:42:25.284943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.704 18:42:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.704 18:42:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.704 spdk_app_start Round 2 00:05:43.704 18:42:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2308051 /var/tmp/spdk-nbd.sock 00:05:43.704 18:42:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2308051 ']' 00:05:43.704 18:42:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.704 18:42:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.704 18:42:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
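Round 1 above runs the same write/verify pass as Round 0; stripped of the harness plumbing, the data path is just dd plus cmp. A condensed sketch of that flow, with the scratch file moved to /tmp for readability:

    # Seed 1 MiB of random data (256 x 4 KiB blocks).
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    # Push it through each exported nbd device with O_DIRECT writes...
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    # ...then verify byte-for-byte that the first 1 MiB reads back identically.
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest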
00:05:43.704 18:42:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.704 18:42:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.704 18:42:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.704 18:42:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:43.704 18:42:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.704 Malloc0 00:05:43.704 18:42:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.963 Malloc1 00:05:43.963 18:42:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.963 18:42:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.221 /dev/nbd0 00:05:44.221 18:42:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.221 18:42:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:44.221 1+0 records in 00:05:44.221 1+0 records out 00:05:44.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230388 s, 17.8 MB/s 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.221 18:42:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:44.221 18:42:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.221 18:42:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.221 18:42:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.479 /dev/nbd1 00:05:44.479 18:42:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.479 18:42:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.479 18:42:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.479 1+0 records in 00:05:44.479 1+0 records out 00:05:44.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222814 s, 18.4 MB/s 00:05:44.480 18:42:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.480 18:42:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:44.480 18:42:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.480 18:42:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.480 18:42:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:44.480 18:42:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.480 18:42:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.480 18:42:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.480 18:42:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.480 18:42:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:44.738 { 00:05:44.738 "nbd_device": "/dev/nbd0", 00:05:44.738 "bdev_name": "Malloc0" 00:05:44.738 }, 00:05:44.738 { 00:05:44.738 "nbd_device": "/dev/nbd1", 00:05:44.738 "bdev_name": "Malloc1" 00:05:44.738 } 00:05:44.738 ]' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.738 { 00:05:44.738 "nbd_device": "/dev/nbd0", 00:05:44.738 "bdev_name": "Malloc0" 00:05:44.738 }, 00:05:44.738 { 00:05:44.738 "nbd_device": "/dev/nbd1", 00:05:44.738 "bdev_name": "Malloc1" 00:05:44.738 } 00:05:44.738 ]' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.738 /dev/nbd1' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.738 /dev/nbd1' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.738 256+0 records in 00:05:44.738 256+0 records out 00:05:44.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977351 s, 107 MB/s 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.738 256+0 records in 00:05:44.738 256+0 records out 00:05:44.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019829 s, 52.9 MB/s 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.738 256+0 records in 00:05:44.738 256+0 records out 00:05:44.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207441 s, 50.5 MB/s 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.738 18:42:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.997 18:42:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.254 18:42:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.254 18:42:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.254 18:42:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.254 18:42:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.254 18:42:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.254 18:42:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.512 18:42:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.512 18:42:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.512 18:42:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.512 18:42:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.512 18:42:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.771 18:42:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.771 18:42:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.030 18:42:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.290 [2024-07-24 18:42:31.057778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.290 [2024-07-24 18:42:31.138841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.290 [2024-07-24 18:42:31.138845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.290 [2024-07-24 18:42:31.183353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.290 [2024-07-24 18:42:31.183397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.869 18:42:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2308051 /var/tmp/spdk-nbd.sock 00:05:48.869 18:42:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2308051 ']' 00:05:48.869 18:42:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.869 18:42:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.869 18:42:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
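The teardown above ends by asserting that no exports remain, counting device nodes in the nbd_get_disks reply. The jq/grep pipeline from the log, restated as a sketch; the '|| true' guard is an assumption (grep -c exits nonzero on a zero count, which would trip set -e; the log only shows a bare true):

    # Ask the target which nbd devices are still attached; the reply is a JSON
    # array of { "nbd_device": ..., "bdev_name": ... } objects.
    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # Extract the device nodes and count how many look like /dev/nbdN.
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || echo "unexpected nbd devices still attached: $count"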
00:05:48.869 18:42:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.869 18:42:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.127 18:42:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.127 18:42:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:49.127 18:42:34 event.app_repeat -- event/event.sh@39 -- # killprocess 2308051 00:05:49.127 18:42:34 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2308051 ']' 00:05:49.128 18:42:34 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2308051 00:05:49.128 18:42:34 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:49.128 18:42:34 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.128 18:42:34 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2308051 00:05:49.386 18:42:34 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.386 18:42:34 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.386 18:42:34 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2308051' 00:05:49.386 killing process with pid 2308051 00:05:49.386 18:42:34 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2308051 00:05:49.386 18:42:34 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2308051 00:05:49.386 spdk_app_start is called in Round 0. 00:05:49.386 Shutdown signal received, stop current app iteration 00:05:49.386 Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 reinitialization... 00:05:49.386 spdk_app_start is called in Round 1. 00:05:49.386 Shutdown signal received, stop current app iteration 00:05:49.386 Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 reinitialization... 00:05:49.386 spdk_app_start is called in Round 2. 00:05:49.386 Shutdown signal received, stop current app iteration 00:05:49.386 Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 reinitialization... 00:05:49.386 spdk_app_start is called in Round 3. 
00:05:49.386 Shutdown signal received, stop current app iteration 00:05:49.386 18:42:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:49.386 18:42:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:49.386 00:05:49.386 real 0m18.059s 00:05:49.386 user 0m40.202s 00:05:49.386 sys 0m2.934s 00:05:49.387 18:42:34 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.387 18:42:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.387 ************************************ 00:05:49.387 END TEST app_repeat 00:05:49.387 ************************************ 00:05:49.387 18:42:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:49.387 18:42:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.387 18:42:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.387 18:42:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.387 18:42:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.646 ************************************ 00:05:49.646 START TEST cpu_locks 00:05:49.646 ************************************ 00:05:49.646 18:42:34 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.646 * Looking for test storage... 00:05:49.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.646 18:42:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:49.646 18:42:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:49.646 18:42:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:49.646 18:42:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:49.646 18:42:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.646 18:42:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.646 18:42:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.646 ************************************ 00:05:49.646 START TEST default_locks 00:05:49.646 ************************************ 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2311437 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2311437 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2311437 ']' 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
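
The waitforlisten calls traced here come from test/common/autotest_common.sh (the sh@829 to sh@862 line tags above). A simplified sketch of what it does; the retry counter and socket argument are taken from the trace, while the probe command is an illustrative assumption rather than the helper's exact code:

    # Block until the process at $1 answers RPC on socket $2, or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            # illustrative probe: any cheap RPC that succeeds once the socket is up
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket is up
            fi
            sleep 0.5
        done
        return 1                                     # retries exhausted
    }
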
00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.646 18:42:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.646 [2024-07-24 18:42:34.583112] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:49.646 [2024-07-24 18:42:34.583172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311437 ] 00:05:49.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.906 [2024-07-24 18:42:34.665418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.906 [2024-07-24 18:42:34.758760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.165 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.165 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:50.165 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2311437 00:05:50.165 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2311437 00:05:50.165 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.424 lslocks: write error 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2311437 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2311437 ']' 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2311437 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311437 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311437' 00:05:50.424 killing process with pid 2311437 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2311437 00:05:50.424 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2311437 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2311437 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2311437 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2311437 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2311437 ']' 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2311437) - No such process 00:05:50.684 ERROR: process (pid: 2311437) is no longer running 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.684 00:05:50.684 real 0m1.133s 00:05:50.684 user 0m1.343s 00:05:50.684 sys 0m0.497s 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.684 18:42:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.684 ************************************ 00:05:50.684 END TEST default_locks 00:05:50.684 ************************************ 00:05:50.684 18:42:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.684 18:42:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.684 18:42:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.943 18:42:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.943 ************************************ 00:05:50.943 START TEST default_locks_via_rpc 00:05:50.943 ************************************ 00:05:50.943 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:50.943 18:42:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2311725 00:05:50.943 18:42:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2311725 00:05:50.943 18:42:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:50.943 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2311725 ']' 00:05:50.943 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.944 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.944 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.944 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.944 18:42:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.944 [2024-07-24 18:42:35.781750] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:50.944 [2024-07-24 18:42:35.781802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311725 ] 00:05:50.944 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.944 [2024-07-24 18:42:35.863920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.203 [2024-07-24 18:42:35.956794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2311725 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2311725 00:05:51.462 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.721 18:42:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
2311725 00:05:51.721 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2311725 ']' 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2311725 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311725 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311725' 00:05:51.722 killing process with pid 2311725 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2311725 00:05:51.722 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2311725 00:05:51.981 00:05:51.981 real 0m1.152s 00:05:51.981 user 0m1.362s 00:05:51.981 sys 0m0.504s 00:05:51.981 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.981 18:42:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.981 ************************************ 00:05:51.981 END TEST default_locks_via_rpc 00:05:51.981 ************************************ 00:05:51.981 18:42:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.981 18:42:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.981 18:42:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.981 18:42:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.981 ************************************ 00:05:51.981 START TEST non_locking_app_on_locked_coremask 00:05:51.981 ************************************ 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2312009 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2312009 /var/tmp/spdk.sock 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2312009 ']' 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:51.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.981 18:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.240 [2024-07-24 18:42:36.997377] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:52.240 [2024-07-24 18:42:36.997431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312009 ] 00:05:52.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.240 [2024-07-24 18:42:37.067270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.240 [2024-07-24 18:42:37.161745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2312026 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2312026 /var/tmp/spdk2.sock 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2312026 ']' 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.499 18:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.499 [2024-07-24 18:42:37.503348] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:52.499 [2024-07-24 18:42:37.503407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312026 ] 00:05:52.757 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.757 [2024-07-24 18:42:37.612843] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
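
The locks_exist check that recurs throughout this suite is just lslocks piped into grep, exactly as the sh@22 lines above show. A sketch using the trace's names:

    # True when the process holds a file lock whose entry mentions spdk_cpu_lock;
    # spdk_tgt takes one flock per claimed core (/var/tmp/spdk_cpu_lock_NNN).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

The stray 'lslocks: write error' lines are harmless: grep -q exits at its first match and closes the pipe, so lslocks fails its next write. The check has in fact succeeded whenever that message appears.
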
00:05:52.757 [2024-07-24 18:42:37.612872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.015 [2024-07-24 18:42:37.788852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.581 18:42:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.581 18:42:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.581 18:42:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2312009 00:05:53.581 18:42:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.581 18:42:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2312009 00:05:54.517 lslocks: write error 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2312009 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2312009 ']' 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2312009 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2312009 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2312009' 00:05:54.518 killing process with pid 2312009 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2312009 00:05:54.518 18:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2312009 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2312026 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2312026 ']' 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2312026 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2312026 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2312026' 00:05:55.085 
killing process with pid 2312026 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2312026 00:05:55.085 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2312026 00:05:55.652 00:05:55.652 real 0m3.478s 00:05:55.652 user 0m3.989s 00:05:55.652 sys 0m1.150s 00:05:55.652 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.652 18:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.652 ************************************ 00:05:55.652 END TEST non_locking_app_on_locked_coremask 00:05:55.652 ************************************ 00:05:55.652 18:42:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.652 18:42:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.652 18:42:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.652 18:42:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.652 ************************************ 00:05:55.652 START TEST locking_app_on_unlocked_coremask 00:05:55.652 ************************************ 00:05:55.652 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:55.652 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2312638 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2312638 /var/tmp/spdk.sock 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2312638 ']' 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.653 18:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.653 [2024-07-24 18:42:40.589807] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:05:55.653 [2024-07-24 18:42:40.589914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312638 ] 00:05:55.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.911 [2024-07-24 18:42:40.707447] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.911 [2024-07-24 18:42:40.707477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.911 [2024-07-24 18:42:40.798340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2312848 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2312848 /var/tmp/spdk2.sock 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2312848 ']' 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.849 18:42:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.849 [2024-07-24 18:42:41.538084] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
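
This test's setup is the inverse of the previous one: the first target starts with --disable-cpumask-locks, so core 0 stays unclaimed, and a second, lock-enabled target is then started on the same core and must come up successfully. In outline (binary and socket paths as logged; each launch followed by waitforlisten as above):

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # takes no core lock
    pid1=$!
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # same core, locking enabled
    pid2=$!
    # only pid2 should now hold /var/tmp/spdk_cpu_lock_000
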
00:05:56.849 [2024-07-24 18:42:41.538145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312848 ] 00:05:56.849 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.849 [2024-07-24 18:42:41.647067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.849 [2024-07-24 18:42:41.825299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.786 18:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.786 18:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.786 18:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2312848 00:05:57.786 18:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2312848 00:05:57.786 18:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.353 lslocks: write error 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2312638 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2312638 ']' 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2312638 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2312638 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.353 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2312638' 00:05:58.353 killing process with pid 2312638 00:05:58.354 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2312638 00:05:58.354 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2312638 00:05:59.289 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2312848 00:05:59.289 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2312848 ']' 00:05:59.289 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2312848 00:05:59.289 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.289 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.289 18:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2312848 00:05:59.289 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:59.289 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.289 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2312848' 00:05:59.289 killing process with pid 2312848 00:05:59.289 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2312848 00:05:59.289 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2312848 00:05:59.548 00:05:59.548 real 0m3.860s 00:05:59.548 user 0m4.279s 00:05:59.548 sys 0m1.159s 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.549 ************************************ 00:05:59.549 END TEST locking_app_on_unlocked_coremask 00:05:59.549 ************************************ 00:05:59.549 18:42:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.549 18:42:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.549 18:42:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.549 18:42:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.549 ************************************ 00:05:59.549 START TEST locking_app_on_locked_coremask 00:05:59.549 ************************************ 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2313407 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2313407 /var/tmp/spdk.sock 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2313407 ']' 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.549 18:42:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.549 [2024-07-24 18:42:44.482191] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:05:59.549 [2024-07-24 18:42:44.482252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313407 ] 00:05:59.549 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.809 [2024-07-24 18:42:44.565347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.809 [2024-07-24 18:42:44.652345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2313669 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2313669 /var/tmp/spdk2.sock 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2313669 /var/tmp/spdk2.sock 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2313669 /var/tmp/spdk2.sock 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2313669 ']' 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.779 18:42:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.779 [2024-07-24 18:42:45.478292] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
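
Here the second target is expected to fail, because the first (lock-enabled, also on core 0) already holds the core lock; the NOT wrapper traced above inverts the exit status so the test passes only when waitforlisten gives up. A simplified sketch of that wrapper (the real one, at the sh@648 to sh@675 tags, also distinguishes signal deaths via es > 128):

    # Run a command that is expected to fail; succeed only if it did fail.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # pid2 standing in for 2313669

The 'Cannot create lock on core 0, probably process 2313407 has claimed it' error that follows is therefore the test's success condition, not a failure.
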
00:06:00.779 [2024-07-24 18:42:45.478355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313669 ] 00:06:00.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.779 [2024-07-24 18:42:45.588581] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2313407 has claimed it. 00:06:00.779 [2024-07-24 18:42:45.588635] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2313669) - No such process 00:06:01.348 ERROR: process (pid: 2313669) is no longer running 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2313407 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2313407 00:06:01.348 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.606 lslocks: write error 00:06:01.606 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2313407 00:06:01.606 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2313407 ']' 00:06:01.606 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2313407 00:06:01.606 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:01.606 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.606 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2313407 00:06:01.865 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.865 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.865 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2313407' 00:06:01.865 killing process with pid 2313407 00:06:01.865 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2313407 00:06:01.865 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2313407 00:06:02.124 00:06:02.124 real 0m2.537s 00:06:02.124 user 0m2.932s 00:06:02.124 sys 0m0.690s 00:06:02.124 18:42:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.124 18:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.124 ************************************ 00:06:02.124 END TEST locking_app_on_locked_coremask 00:06:02.124 ************************************ 00:06:02.124 18:42:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:02.124 18:42:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.124 18:42:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.124 18:42:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.124 ************************************ 00:06:02.124 START TEST locking_overlapped_coremask 00:06:02.124 ************************************ 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2313962 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2313962 /var/tmp/spdk.sock 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2313962 ']' 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.124 18:42:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.124 [2024-07-24 18:42:47.085429] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:06:02.124 [2024-07-24 18:42:47.085480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313962 ] 00:06:02.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.383 [2024-07-24 18:42:47.167497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.383 [2024-07-24 18:42:47.259528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.383 [2024-07-24 18:42:47.259643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.383 [2024-07-24 18:42:47.259644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2314188 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2314188 /var/tmp/spdk2.sock 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2314188 /var/tmp/spdk2.sock 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:03.321 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2314188 /var/tmp/spdk2.sock 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2314188 ']' 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.322 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.322 [2024-07-24 18:42:48.071141] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
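
The masks chosen here overlap on exactly one core: -m 0x7 pins the first target to cores 0-2 and -m 0x1c pins the second to cores 2-4. That shared bit is what the claim error on the next lines is about:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap: 0x4, i.e. core 2
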
00:06:03.322 [2024-07-24 18:42:48.071205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314188 ] 00:06:03.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.322 [2024-07-24 18:42:48.261987] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2313962 has claimed it. 00:06:03.322 [2024-07-24 18:42:48.262074] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2314188) - No such process 00:06:03.889 ERROR: process (pid: 2314188) is no longer running 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2313962 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2313962 ']' 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2313962 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2313962 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2313962' 00:06:03.889 killing process with pid 2313962 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2313962 00:06:03.889 18:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2313962 00:06:04.148 00:06:04.148 real 0m2.124s 00:06:04.148 user 0m6.030s 00:06:04.148 sys 0m0.505s 00:06:04.148 18:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.148 18:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.148 ************************************ 00:06:04.148 END TEST locking_overlapped_coremask 00:06:04.148 ************************************ 00:06:04.407 18:42:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:04.407 18:42:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.407 18:42:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.407 18:42:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.407 ************************************ 00:06:04.407 START TEST locking_overlapped_coremask_via_rpc 00:06:04.407 ************************************ 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2314338 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2314338 /var/tmp/spdk.sock 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2314338 ']' 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.407 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.408 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.408 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.408 18:42:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 [2024-07-24 18:42:49.316316] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:04.408 [2024-07-24 18:42:49.316428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314338 ] 00:06:04.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.667 [2024-07-24 18:42:49.440625] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.667 [2024-07-24 18:42:49.440661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.667 [2024-07-24 18:42:49.545136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.667 [2024-07-24 18:42:49.545246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.667 [2024-07-24 18:42:49.545246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2314539 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2314539 /var/tmp/spdk2.sock 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2314539 ']' 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.603 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.604 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.604 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.604 18:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.604 [2024-07-24 18:42:50.339477] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:05.604 [2024-07-24 18:42:50.339521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314539 ] 00:06:05.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.604 [2024-07-24 18:42:50.518667] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.604 [2024-07-24 18:42:50.518724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.863 [2024-07-24 18:42:50.820113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.863 [2024-07-24 18:42:50.820233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.863 [2024-07-24 18:42:50.820237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.433 [2024-07-24 18:42:51.356800] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2314338 has claimed it. 
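(A note on the claim failure just above: the first target was launched with -m 0x7, i.e. cores 0-2, and the second with -m 0x1c, i.e. cores 2-4, so the two cpumasks overlap on core 2 — exactly the core the lock claim fails on. A minimal sketch of that check, using only the two masks from this trace; plain shell arithmetic, not part of the test script itself:)

    # Cores 0-2 (0x7) vs. cores 2-4 (0x1c): the AND of the two masks
    # names the contested core. Illustration only, not harness code.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints "overlap: 0x4", i.e. core 2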
00:06:06.433 request: 00:06:06.433 { 00:06:06.433 "method": "framework_enable_cpumask_locks", 00:06:06.433 "req_id": 1 00:06:06.433 } 00:06:06.433 Got JSON-RPC error response 00:06:06.433 response: 00:06:06.433 { 00:06:06.433 "code": -32603, 00:06:06.433 "message": "Failed to claim CPU core: 2" 00:06:06.433 } 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2314338 /var/tmp/spdk.sock 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2314338 ']' 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.433 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2314539 /var/tmp/spdk2.sock 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2314539 ']' 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
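(The rpc_cmd traced above is the autotest wrapper around SPDK's JSON-RPC client. Assuming the stock scripts/rpc.py shipped in the SPDK tree, a stand-alone equivalent of the failing call would look like the line below; the socket path is the one printed in this run:)

    # Assumed stand-alone equivalent of the wrapped rpc_cmd call above;
    # -s selects the UNIX domain socket the second target listens on.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks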
00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.693 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.952 00:06:06.952 real 0m2.666s 00:06:06.952 user 0m1.344s 00:06:06.952 sys 0m0.208s 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.952 18:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.952 ************************************ 00:06:06.952 END TEST locking_overlapped_coremask_via_rpc 00:06:06.952 ************************************ 00:06:06.952 18:42:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:06.952 18:42:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2314338 ]] 00:06:06.952 18:42:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2314338 00:06:06.952 18:42:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2314338 ']' 00:06:06.952 18:42:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2314338 00:06:06.952 18:42:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:06.952 18:42:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.952 18:42:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2314338 00:06:07.212 18:42:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.212 18:42:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.212 18:42:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2314338' 00:06:07.212 killing process with pid 2314338 00:06:07.212 18:42:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2314338 00:06:07.212 18:42:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2314338 00:06:07.471 18:42:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2314539 ]] 00:06:07.471 18:42:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2314539 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2314539 ']' 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2314539 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2314539 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2314539' 00:06:07.471 killing process with pid 2314539 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2314539 00:06:07.471 18:42:52 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2314539 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2314338 ]] 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2314338 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2314338 ']' 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2314338 00:06:08.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2314338) - No such process 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2314338 is not found' 00:06:08.041 Process with pid 2314338 is not found 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2314539 ]] 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2314539 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2314539 ']' 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2314539 00:06:08.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2314539) - No such process 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2314539 is not found' 00:06:08.041 Process with pid 2314539 is not found 00:06:08.041 18:42:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.041 00:06:08.041 real 0m18.454s 00:06:08.041 user 0m33.692s 00:06:08.041 sys 0m5.784s 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.041 18:42:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.041 ************************************ 00:06:08.041 END TEST cpu_locks 00:06:08.041 ************************************ 00:06:08.041 00:06:08.041 real 0m46.198s 00:06:08.041 user 1m31.493s 00:06:08.041 sys 0m9.824s 00:06:08.041 18:42:52 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.041 18:42:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.041 ************************************ 00:06:08.041 END TEST event 00:06:08.041 ************************************ 00:06:08.041 18:42:52 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:08.041 18:42:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.041 18:42:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.041 18:42:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.041 ************************************ 00:06:08.041 START TEST thread 00:06:08.041 ************************************ 00:06:08.041 18:42:52 thread -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:08.301 * Looking for test storage... 00:06:08.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:08.302 18:42:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.302 18:42:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:08.302 18:42:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.302 18:42:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.302 ************************************ 00:06:08.302 START TEST thread_poller_perf 00:06:08.302 ************************************ 00:06:08.302 18:42:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.302 [2024-07-24 18:42:53.129624] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:08.302 [2024-07-24 18:42:53.129692] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315154 ] 00:06:08.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.302 [2024-07-24 18:42:53.210753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.302 [2024-07-24 18:42:53.298897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.302 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:09.678 ====================================== 00:06:09.678 busy:2209031782 (cyc) 00:06:09.678 total_run_count: 255000 00:06:09.678 tsc_hz: 2200000000 (cyc) 00:06:09.678 ====================================== 00:06:09.678 poller_cost: 8662 (cyc), 3937 (nsec) 00:06:09.678 00:06:09.678 real 0m1.278s 00:06:09.678 user 0m1.176s 00:06:09.678 sys 0m0.096s 00:06:09.678 18:42:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.678 18:42:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.678 ************************************ 00:06:09.678 END TEST thread_poller_perf 00:06:09.678 ************************************ 00:06:09.678 18:42:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.678 18:42:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:09.678 18:42:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.678 18:42:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.678 ************************************ 00:06:09.678 START TEST thread_poller_perf 00:06:09.678 ************************************ 00:06:09.678 18:42:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.678 [2024-07-24 18:42:54.470918] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:06:09.678 [2024-07-24 18:42:54.470988] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315442 ] 00:06:09.678 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.678 [2024-07-24 18:42:54.554333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.678 [2024-07-24 18:42:54.641810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.678 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:11.056 ====================================== 00:06:11.056 busy:2202232006 (cyc) 00:06:11.056 total_run_count: 3376000 00:06:11.056 tsc_hz: 2200000000 (cyc) 00:06:11.056 ====================================== 00:06:11.056 poller_cost: 652 (cyc), 296 (nsec) 00:06:11.056 00:06:11.056 real 0m1.270s 00:06:11.056 user 0m1.170s 00:06:11.056 sys 0m0.094s 00:06:11.056 18:42:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.056 18:42:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 ************************************ 00:06:11.056 END TEST thread_poller_perf 00:06:11.056 ************************************ 00:06:11.057 18:42:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:11.057 00:06:11.057 real 0m2.777s 00:06:11.057 user 0m2.443s 00:06:11.057 sys 0m0.339s 00:06:11.057 18:42:55 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.057 18:42:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.057 ************************************ 00:06:11.057 END TEST thread 00:06:11.057 ************************************ 00:06:11.057 18:42:55 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:11.057 18:42:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.057 18:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.057 18:42:55 -- common/autotest_common.sh@10 -- # set +x 00:06:11.057 ************************************ 00:06:11.057 START TEST accel 00:06:11.057 ************************************ 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:11.057 * Looking for test storage... 
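(The two poller_perf result blocks above can be reproduced from their own counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz. Integer shell arithmetic, shown only to make the derivation explicit; the tool may round differently:)

    echo $(( 2209031782 / 255000 ))              # 8662 cyc  (1 us period run)
    echo $(( 8662 * 1000000000 / 2200000000 ))   # 3937 nsec at tsc_hz 2.2 GHz
    echo $(( 2202232006 / 3376000 ))             # 652 cyc   (0 us period run)
    echo $(( 652 * 1000000000 / 2200000000 ))    # 296 nsec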
00:06:11.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:11.057 18:42:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:11.057 18:42:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:11.057 18:42:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.057 18:42:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2315761 00:06:11.057 18:42:55 accel -- accel/accel.sh@63 -- # waitforlisten 2315761 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@829 -- # '[' -z 2315761 ']' 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.057 18:42:55 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:11.057 18:42:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.057 18:42:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.057 18:42:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.057 18:42:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.057 18:42:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.057 18:42:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.057 18:42:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.057 18:42:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:11.057 18:42:55 accel -- accel/accel.sh@41 -- # jq -r . 00:06:11.057 [2024-07-24 18:42:55.973671] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:11.057 [2024-07-24 18:42:55.973736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315761 ] 00:06:11.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.057 [2024-07-24 18:42:56.057399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.316 [2024-07-24 18:42:56.147042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.254 18:42:56 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.254 18:42:56 accel -- common/autotest_common.sh@862 -- # return 0 00:06:12.254 18:42:56 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:12.254 18:42:56 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:12.254 18:42:56 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:12.254 18:42:56 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:12.254 18:42:56 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:12.254 18:42:56 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:12.255 18:42:56 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 
18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # IFS== 00:06:12.255 18:42:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:12.255 18:42:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:12.255 18:42:56 accel -- accel/accel.sh@75 -- # killprocess 2315761 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@948 -- # '[' -z 2315761 ']' 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@952 -- # kill -0 2315761 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@953 -- # uname 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.255 18:42:56 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2315761 00:06:12.255 18:42:57 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.255 18:42:57 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.255 18:42:57 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2315761' 00:06:12.255 killing process with pid 2315761 00:06:12.255 18:42:57 accel -- common/autotest_common.sh@967 -- # kill 2315761 00:06:12.255 18:42:57 accel -- common/autotest_common.sh@972 -- # wait 2315761 00:06:12.515 18:42:57 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:12.515 18:42:57 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:12.515 18:42:57 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:12.515 18:42:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.515 18:42:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 18:42:57 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:12.515 18:42:57 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:12.515 18:42:57 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.515 18:42:57 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 18:42:57 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:12.515 18:42:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.515 18:42:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.515 18:42:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 ************************************ 00:06:12.515 START TEST accel_missing_filename 00:06:12.515 ************************************ 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.515 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:12.515 18:42:57 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:12.515 [2024-07-24 18:42:57.521515] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:12.515 [2024-07-24 18:42:57.521572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316065 ] 00:06:12.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.773 [2024-07-24 18:42:57.602369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.773 [2024-07-24 18:42:57.690087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.773 [2024-07-24 18:42:57.735088] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.069 [2024-07-24 18:42:57.798191] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:13.069 A filename is required. 
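("A filename is required." is the expected outcome here: compress workloads need -l with an uncompressed input file, which this negative test deliberately omits. A hedged sketch of the positive counterpart, pointing at the same sample input file the accel_compress_verify run further down uses:)

    # Sketch of the passing form of the call above; the bib file is the
    # sample input referenced by the later compress test in this log.
    accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib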
00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.069 00:06:13.069 real 0m0.387s 00:06:13.069 user 0m0.324s 00:06:13.069 sys 0m0.134s 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.069 18:42:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 ************************************ 00:06:13.069 END TEST accel_missing_filename 00:06:13.069 ************************************ 00:06:13.069 18:42:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.069 18:42:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:13.069 18:42:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.069 18:42:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 ************************************ 00:06:13.069 START TEST accel_compress_verify 00:06:13.069 ************************************ 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.069 18:42:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.069 
18:42:57 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:13.069 18:42:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:13.069 [2024-07-24 18:42:57.977275] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:13.069 [2024-07-24 18:42:57.977343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316179 ] 00:06:13.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.069 [2024-07-24 18:42:58.049214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.329 [2024-07-24 18:42:58.136459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.329 [2024-07-24 18:42:58.180774] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.329 [2024-07-24 18:42:58.243560] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:13.329 00:06:13.329 Compression does not support the verify option, aborting. 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.329 00:06:13.329 real 0m0.376s 00:06:13.329 user 0m0.285s 00:06:13.329 sys 0m0.128s 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.329 18:42:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:13.329 ************************************ 00:06:13.329 END TEST accel_compress_verify 00:06:13.329 ************************************ 00:06:13.588 18:42:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:13.588 18:42:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.588 18:42:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.588 18:42:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.588 ************************************ 00:06:13.588 START TEST accel_wrong_workload 00:06:13.588 ************************************ 00:06:13.588 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:13.588 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:13.588 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 
1 -w foobar 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:13.589 18:42:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:13.589 Unsupported workload type: foobar 00:06:13.589 [2024-07-24 18:42:58.425338] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:13.589 accel_perf options: 00:06:13.589 [-h help message] 00:06:13.589 [-q queue depth per core] 00:06:13.589 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.589 [-T number of threads per core 00:06:13.589 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.589 [-t time in seconds] 00:06:13.589 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.589 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:13.589 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.589 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.589 [-S for crc32c workload, use this seed value (default 0) 00:06:13.589 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.589 [-f for fill workload, use this BYTE value (default 255) 00:06:13.589 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.589 [-y verify result if this switch is on] 00:06:13.589 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.589 Can be used to spread operations across a wider range of memory. 
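(The usage dump above doubles as a reference for composing accel_perf runs by hand. An illustrative invocation built only from options listed there — the values are examples, not taken from this run:)

    # -q queue depth, -o transfer size, -t seconds, -w workload type,
    # -S crc32c seed, -y verify result; all flags appear in the help above.
    accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y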
00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.589 00:06:13.589 real 0m0.035s 00:06:13.589 user 0m0.018s 00:06:13.589 sys 0m0.017s 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.589 18:42:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:13.589 ************************************ 00:06:13.589 END TEST accel_wrong_workload 00:06:13.589 ************************************ 00:06:13.589 Error: writing output failed: Broken pipe 00:06:13.589 18:42:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.589 18:42:58 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:13.589 18:42:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.589 18:42:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.589 ************************************ 00:06:13.589 START TEST accel_negative_buffers 00:06:13.589 ************************************ 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:13.589 18:42:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:13.589 -x option must be non-negative. 
00:06:13.589 [2024-07-24 18:42:58.532681] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:13.589 accel_perf options: 00:06:13.589 [-h help message] 00:06:13.589 [-q queue depth per core] 00:06:13.589 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.589 [-T number of threads per core 00:06:13.589 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.589 [-t time in seconds] 00:06:13.589 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.589 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:13.589 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.589 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.589 [-S for crc32c workload, use this seed value (default 0) 00:06:13.589 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.589 [-f for fill workload, use this BYTE value (default 255) 00:06:13.589 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.589 [-y verify result if this switch is on] 00:06:13.589 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.589 Can be used to spread operations across a wider range of memory. 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.589 00:06:13.589 real 0m0.035s 00:06:13.589 user 0m0.023s 00:06:13.589 sys 0m0.012s 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.589 18:42:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:13.589 ************************************ 00:06:13.589 END TEST accel_negative_buffers 00:06:13.589 ************************************ 00:06:13.589 Error: writing output failed: Broken pipe 00:06:13.589 18:42:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:13.589 18:42:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:13.589 18:42:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.589 18:42:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.850 ************************************ 00:06:13.850 START TEST accel_crc32c 00:06:13.850 ************************************ 00:06:13.850 18:42:58 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:13.850 [2024-07-24 18:42:58.635038] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:13.850 [2024-07-24 18:42:58.635096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316405 ] 00:06:13.850 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.850 [2024-07-24 18:42:58.707383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.850 [2024-07-24 18:42:58.797584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 
18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.850 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.110 18:42:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.110 18:42:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.110 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.110 18:42:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.047 18:42:59 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:15.047 18:42:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.047 00:06:15.047 real 0m1.387s 00:06:15.047 user 0m1.271s 00:06:15.047 sys 0m0.129s 00:06:15.047 18:42:59 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.047 18:42:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:15.047 ************************************ 00:06:15.047 END TEST accel_crc32c 00:06:15.047 ************************************ 00:06:15.047 18:43:00 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:15.047 18:43:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.047 18:43:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.047 18:43:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.307 ************************************ 00:06:15.307 START TEST accel_crc32c_C2 00:06:15.307 ************************************ 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:15.307 18:43:00 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.307 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:15.308 [2024-07-24 18:43:00.090151] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:15.308 [2024-07-24 18:43:00.090208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316683 ] 00:06:15.308 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.308 [2024-07-24 18:43:00.169317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.308 [2024-07-24 18:43:00.256240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 18:43:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.505 00:06:16.505 real 0m1.390s 00:06:16.505 user 0m1.264s 00:06:16.505 sys 0m0.139s 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.505 18:43:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:16.505 ************************************ 00:06:16.505 END TEST accel_crc32c_C2 00:06:16.505 ************************************ 00:06:16.505 18:43:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:16.505 18:43:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.505 18:43:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.505 18:43:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.765 ************************************ 00:06:16.765 START TEST accel_copy 00:06:16.765 ************************************ 00:06:16.765 18:43:01 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf 
-t 1 -w copy -y 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:16.765 [2024-07-24 18:43:01.550315] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:16.765 [2024-07-24 18:43:01.550380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316968 ] 00:06:16.765 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.765 [2024-07-24 18:43:01.631445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.765 [2024-07-24 18:43:01.720497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.765 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.025 18:43:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.962 18:43:02 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:17.962 18:43:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.962 00:06:17.962 real 0m1.394s 00:06:17.962 user 0m1.265s 00:06:17.962 sys 0m0.141s 00:06:17.962 18:43:02 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.962 18:43:02 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:17.962 ************************************ 00:06:17.962 END TEST accel_copy 00:06:17.962 ************************************ 00:06:17.962 18:43:02 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.962 18:43:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:17.962 18:43:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.962 18:43:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.220 ************************************ 00:06:18.220 START TEST accel_fill 00:06:18.220 ************************************ 00:06:18.220 18:43:02 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@34 -- 
# [[ 0 -gt 0 ]] 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:18.220 18:43:02 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:18.220 [2024-07-24 18:43:03.012353] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:18.220 [2024-07-24 18:43:03.012421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317247 ] 00:06:18.220 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.220 [2024-07-24 18:43:03.094227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.220 [2024-07-24 18:43:03.181189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.220 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:18.221 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.221 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.221 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.221 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.479 18:43:03 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 18:43:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:19.416 18:43:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.416 00:06:19.416 real 0m1.394s 00:06:19.416 user 0m1.271s 00:06:19.416 sys 0m0.135s 00:06:19.416 18:43:04 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.416 18:43:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:19.416 ************************************ 00:06:19.416 END TEST accel_fill 00:06:19.416 ************************************ 00:06:19.416 18:43:04 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:19.416 18:43:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.416 18:43:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.416 18:43:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.675 ************************************ 00:06:19.675 START TEST accel_copy_crc32c 00:06:19.675 ************************************ 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.675 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:19.675 18:43:04 
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:19.675 [2024-07-24 18:43:04.473213] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:19.675 [2024-07-24 18:43:04.473268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317532 ] 00:06:19.675 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.675 [2024-07-24 18:43:04.554188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.675 [2024-07-24 18:43:04.641992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.934 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.935 18:43:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.874 00:06:20.874 real 0m1.393s 00:06:20.874 user 0m1.266s 00:06:20.874 sys 0m0.141s 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.874 18:43:05 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:20.874 ************************************ 00:06:20.874 END TEST accel_copy_crc32c 00:06:20.874 ************************************ 00:06:20.874 18:43:05 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:20.874 18:43:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:20.875 18:43:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.875 18:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.133 ************************************ 00:06:21.133 START TEST accel_copy_crc32c_C2 00:06:21.133 ************************************ 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.133 
18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.133 18:43:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:21.133 [2024-07-24 18:43:05.938815] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:21.133 [2024-07-24 18:43:05.938920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317810 ] 00:06:21.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.133 [2024-07-24 18:43:06.053733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.392 [2024-07-24 18:43:06.148578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.392 18:43:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.769 00:06:22.769 real 0m1.440s 00:06:22.769 user 0m1.283s 00:06:22.769 sys 0m0.170s 00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
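The chained run whose timing summary just printed is the one case in this stretch where the trace records two different buffer sizes: a '4096 bytes' value alongside an '8192 bytes' value for copy_crc32c with -C 2. A minimal sketch of reproducing just this case by hand, using only the flags visible in the trace above (the binary path is the one the harness used; whether accel_perf runs without the JSON config the harness feeds it on -c /dev/fd/62 is an assumption):

    # sketch: chained copy+crc32c, 1 second, software module, with -y verification
    # (-C 2 is the chained-operation count passed by the harness above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y -C 2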
00:06:22.769 18:43:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:22.769 ************************************ 00:06:22.769 END TEST accel_copy_crc32c_C2 00:06:22.769 ************************************ 00:06:22.769 18:43:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:22.769 18:43:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.769 18:43:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.769 18:43:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.769 ************************************ 00:06:22.769 START TEST accel_dualcast 00:06:22.769 ************************************ 00:06:22.769 18:43:07 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.769 18:43:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:22.770 [2024-07-24 18:43:07.438437] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:06:22.770 [2024-07-24 18:43:07.438493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318092 ] 00:06:22.770 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.770 [2024-07-24 18:43:07.519682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.770 [2024-07-24 18:43:07.607821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.770 18:43:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:24.149 18:43:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.149 00:06:24.149 real 0m1.394s 00:06:24.149 user 0m1.260s 00:06:24.149 sys 0m0.146s 00:06:24.149 18:43:08 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.149 18:43:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:24.149 ************************************ 00:06:24.149 END TEST accel_dualcast 00:06:24.149 ************************************ 00:06:24.149 18:43:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:24.149 18:43:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:24.149 18:43:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.149 18:43:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.149 ************************************ 00:06:24.149 START TEST accel_compare 00:06:24.149 ************************************ 00:06:24.149 18:43:08 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:24.149 18:43:08 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:24.149 [2024-07-24 18:43:08.903312] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
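[editor's note] The long runs of 'case "$var" in / IFS=: / read -r var val' xtrace above come from the accel.sh helper walking its list of expected test settings (opcode, buffer size, queue depth, run time, module) before comparing them with what accel_perf actually used. The helper's source is not reproduced in this log; a minimal sketch of a loop with that shape, with hypothetical variable names and input, would be:

    # hedged reconstruction, not SPDK's exact code: each setting arrives
    # as a colon-separated var:val pair and is dispatched through a case
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;         # workload under test, e.g. dualcast
            module) accel_module=$val ;;   # executing engine, e.g. software
            *) ;;                          # sizes, queue depth, run time, ...
        esac
    done <<<"$expected_settings"           # hypothetical settings list

Each iteration of such a loop emits one val=/case/IFS=:/read quartet under set -x, which is why the same four trace lines repeat once per setting in every test below.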
00:06:24.149 [2024-07-24 18:43:08.903381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318375 ] 00:06:24.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.149 [2024-07-24 18:43:08.985816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.149 [2024-07-24 18:43:09.074434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.149 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.150 18:43:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 
18:43:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 18:43:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.530 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.530 18:43:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.530 18:43:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.530 18:43:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:25.530 18:43:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.530 00:06:25.530 real 0m1.396s 00:06:25.530 user 0m1.274s 00:06:25.530 sys 0m0.134s 00:06:25.530 18:43:10 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.530 18:43:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:25.530 ************************************ 00:06:25.530 END TEST accel_compare 00:06:25.530 ************************************ 00:06:25.530 18:43:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:25.530 18:43:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.530 18:43:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.530 18:43:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.530 ************************************ 00:06:25.530 START TEST accel_xor 00:06:25.530 ************************************ 00:06:25.530 18:43:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:25.530 18:43:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:25.530 [2024-07-24 18:43:10.364465] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
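[editor's note] Every case in this block wraps the same perf binary, visible in the command line traced above (build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y). The core flags can be exercised by hand; a minimal sketch from the workspace root, leaving out the JSON accel config that the harness pipes in over fd 62 (the build_accel_config / jq -r . lines above):

    # -t run time in seconds, -w workload type, -y verify the result buffers
    ./spdk/build/examples/accel_perf -t 1 -w compare -y

The '[[ -n software ]]' checks after each run are the harness confirming that the software fallback engine, not a hardware accelerator module, executed the workload.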
00:06:25.530 [2024-07-24 18:43:10.364519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318654 ] 00:06:25.530 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.530 [2024-07-24 18:43:10.445507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.530 [2024-07-24 18:43:10.534339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.789 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.790 18:43:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:26.769 18:43:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.769 00:06:26.769 real 0m1.393s 00:06:26.769 user 0m1.281s 00:06:26.769 sys 0m0.124s 00:06:26.769 18:43:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.769 18:43:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:26.769 ************************************ 00:06:26.769 END TEST accel_xor 00:06:26.769 ************************************ 00:06:26.769 18:43:11 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:26.769 18:43:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:26.769 18:43:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.769 18:43:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.029 ************************************ 00:06:27.029 START TEST accel_xor 00:06:27.029 ************************************ 00:06:27.029 18:43:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:27.029 18:43:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:27.029 [2024-07-24 18:43:11.829891] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
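[editor's note] This second accel_xor case re-runs the xor workload with -x 3. In the trace that follows, val=3 replaces the val=2 recorded by the previous run, i.e. the number of source buffers XORed into the destination (the first run used the default of 2 without passing -x). A hedged one-line equivalent of what the harness launches here:

    # -x sets the xor source-buffer count; 3 sources instead of the default 2
    ./spdk/build/examples/accel_perf -t 1 -w xor -y -x 3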
00:06:27.029 [2024-07-24 18:43:11.829963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318941 ] 00:06:27.029 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.029 [2024-07-24 18:43:11.901382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.029 [2024-07-24 18:43:11.988764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.029 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.289 18:43:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:28.251 18:43:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.251 00:06:28.251 real 0m1.384s 00:06:28.251 user 0m1.268s 00:06:28.251 sys 0m0.127s 00:06:28.251 18:43:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.251 18:43:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:28.251 ************************************ 00:06:28.251 END TEST accel_xor 00:06:28.251 ************************************ 00:06:28.251 18:43:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:28.251 18:43:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.251 18:43:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.251 18:43:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.251 ************************************ 00:06:28.251 START TEST accel_dif_verify 00:06:28.251 ************************************ 00:06:28.251 18:43:13 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.251 18:43:13 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.252 18:43:13 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.252 18:43:13 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.252 18:43:13 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:28.252 18:43:13 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:28.509 [2024-07-24 18:43:13.278440] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
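[editor's note] The DIF tests that follow add protection-information parameters to the usual 4096-byte transfer: their traces record '4096 bytes' buffers plus '512 bytes' and '8 bytes' values, consistent with the standard DIF layout of one 8-byte protection tuple per 512-byte block. Worked out for this geometry (a sketch; variable names are illustrative):

    xfer=4096 blk=512 tuple=8
    echo "$((xfer / blk)) blocks, $((xfer / blk * tuple)) bytes of protection info"
    # prints: 8 blocks, 64 bytes of protection info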
00:06:28.509 [2024-07-24 18:43:13.278500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319224 ] 00:06:28.509 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.509 [2024-07-24 18:43:13.359720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.509 [2024-07-24 18:43:13.451248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.509 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.510 18:43:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.887 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:29.888 18:43:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.888 00:06:29.888 real 0m1.397s 00:06:29.888 user 0m1.276s 00:06:29.888 sys 0m0.135s 00:06:29.888 18:43:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.888 18:43:14 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:29.888 ************************************ 00:06:29.888 END TEST accel_dif_verify 00:06:29.888 ************************************ 00:06:29.888 18:43:14 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:29.888 18:43:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:29.888 18:43:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.888 18:43:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.888 ************************************ 00:06:29.888 START TEST accel_dif_generate 00:06:29.888 ************************************ 00:06:29.888 18:43:14 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 
-w dif_generate 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:29.888 18:43:14 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:29.888 [2024-07-24 18:43:14.735584] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:29.888 [2024-07-24 18:43:14.735654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319503 ] 00:06:29.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.888 [2024-07-24 18:43:14.817457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.147 [2024-07-24 18:43:14.906044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.147 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.148 18:43:14 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.148 18:43:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.527 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:31.528 18:43:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.528 00:06:31.528 real 0m1.389s 
00:06:31.528 user 0m1.263s
00:06:31.528 sys 0m0.139s
00:06:31.528 18:43:16 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:31.528 18:43:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:06:31.528 ************************************
00:06:31.528 END TEST accel_dif_generate
00:06:31.528 ************************************
00:06:31.528 18:43:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:31.528 18:43:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:31.528 18:43:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:31.528 18:43:16 accel -- common/autotest_common.sh@10 -- # set +x
00:06:31.528 ************************************
00:06:31.528 START TEST accel_dif_generate_copy
00:06:31.528 ************************************
18:43:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
18:43:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
18:43:16 accel.accel_dif_generate_copy [xtrace condensed: build_accel_config defaults -- accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=',', jq -r .]
[2024-07-24 18:43:16.194744] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:06:31.528 [2024-07-24 18:43:16.194801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319785 ]
00:06:31.528 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.528 [2024-07-24 18:43:16.276855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.528 [2024-07-24 18:43:16.363343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
18:43:16 accel.accel_dif_generate_copy [xtrace condensed: accel_perf config reads -- val=, val=, val=0x1, val=, val=, val=dif_generate_copy (accel_opc=dif_generate_copy), val='4096 bytes', val='4096 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No, val=, val=]
00:06:32.908 18:43:17 accel.accel_dif_generate_copy [xtrace condensed: empty val= reads after the run]
18:43:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
18:43:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
18:43:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:32.908
00:06:32.908 real 0m1.393s
00:06:32.908 user 0m1.276s
00:06:32.908 sys 0m0.129s
18:43:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
18:43:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_dif_generate_copy
************************************
18:43:17 accel -- accel/accel.sh@115 -- # [[ y == y ]]
18:43:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
18:43:17 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
18:43:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
18:43:17 accel -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_comp
************************************
18:43:17 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
18:43:17 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
18:43:17 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
18:43:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
18:43:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
18:43:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
18:43:17 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
18:43:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
18:43:17 accel.accel_comp [xtrace condensed: build_accel_config defaults -- accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=',', jq -r .]
[2024-07-24 18:43:17.653183] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:06:32.908 [2024-07-24 18:43:17.653236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320066 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 18:43:17.734042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 18:43:17.820494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
18:43:17 accel.accel_comp [xtrace condensed: accel_perf config reads -- val=, val=, val=0x1, val=, val=, val=compress (accel_opc=compress), val='4096 bytes', val=, val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=No, val=, val=]
00:06:34.288 18:43:19 accel.accel_comp [xtrace condensed: empty val= reads after the run]
18:43:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
18:43:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
18:43:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:34.288
00:06:34.288 real 0m1.394s
00:06:34.288 user 0m1.272s
00:06:34.288 sys 0m0.136s
18:43:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
18:43:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_comp
************************************
18:43:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
18:43:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
18:43:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
18:43:19 accel -- common/autotest_common.sh@10 -- # set +x
************************************
00:06:34.288 START TEST accel_decomp
************************************
00:06:34.289 18:43:19 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
18:43:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
18:43:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
18:43:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
18:43:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
18:43:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
18:43:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
18:43:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
18:43:19 accel.accel_decomp [xtrace condensed: build_accel_config defaults -- accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=',', jq -r .]
[2024-07-24 18:43:19.123374] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
[2024-07-24 18:43:19.123476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320351 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 18:43:19.239505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 18:43:19.326042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.549 18:43:19 accel.accel_decomp [xtrace condensed: accel_perf config reads -- val=, val=, val=0x1, val=, val=, val=decompress (accel_opc=decompress), val='4096 bytes', val=, val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes, val=, val=]
00:06:35.929 18:43:20 accel.accel_decomp [xtrace condensed: empty val= reads after the run]
18:43:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
18:43:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
18:43:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:35.929
00:06:35.929 real 0m1.435s
00:06:35.929 user 0m1.288s
00:06:35.929 sys 0m0.161s
18:43:20 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
18:43:20 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp
************************************
18:43:20 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
18:43:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
18:43:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
18:43:20 accel -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_decomp_full
************************************
18:43:20 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
18:43:20 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
18:43:20 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
18:43:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
18:43:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
18:43:20 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
18:43:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
18:43:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
18:43:20 accel.accel_decomp_full [xtrace condensed: build_accel_config defaults -- accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=',', jq -r .]
[2024-07-24 18:43:20.616679] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:06:35.930 [2024-07-24 18:43:20.616730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320631 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 18:43:20.697432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 18:43:20.783869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
18:43:20 accel.accel_decomp_full [xtrace condensed: accel_perf config reads -- val=, val=, val=0x1, val=, val=, val=decompress (accel_opc=decompress), val='111250 bytes', val=, val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes, val=, val=]
00:06:37.310 18:43:21 accel.accel_decomp_full [xtrace condensed: empty val= reads after the run]
18:43:21 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
18:43:21 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
18:43:21 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:37.310
00:06:37.310 real 0m1.409s
00:06:37.310 user 0m1.287s
00:06:37.310 sys 0m0.135s
18:43:21 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
18:43:21 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp_full
************************************
18:43:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
18:43:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
18:43:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
18:43:22 accel -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_decomp_mcore
************************************
18:43:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
18:43:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
18:43:22 accel.accel_decomp_mcore [xtrace condensed: build_accel_config defaults -- accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=',', jq -r .]
[2024-07-24 18:43:22.094181] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
[2024-07-24 18:43:22.094234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320916 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 18:43:22.175488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-24 18:43:22.265777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-24 18:43:22.265891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-24 18:43:22.266002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-24 18:43:22.266003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
18:43:22 accel.accel_decomp_mcore [xtrace condensed: accel_perf config reads -- val=, val=, val=0xf, val=, val=, val=decompress (accel_opc=decompress), val='4096 bytes', val=, val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes, val=, val=]
00:06:38.522 18:43:23 accel.accel_decomp_mcore [xtrace condensed: empty val= reads after the run]
18:43:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
18:43:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
18:43:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:38.522
00:06:38.522 real 0m1.410s
00:06:38.522 user 0m4.635s
00:06:38.522 sys 0m0.147s
18:43:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
18:43:23 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp_mcore
************************************
18:43:23 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
18:43:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
18:43:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
18:43:23 accel -- common/autotest_common.sh@10 -- # set +x
00:06:38.784 ************************************
00:06:38.784 START TEST accel_decomp_full_mcore
************************************
18:43:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
18:43:23 accel.accel_decomp_full_mcore [xtrace condensed: build_accel_config defaults -- accel_json_cfg=(), [[ 0 -gt 0 ]] x3, [[ -n '' ]], local IFS=',', jq -r .]
[2024-07-24 18:43:23.568418] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
[2024-07-24 18:43:23.568469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321198 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 18:43:23.649404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-24 18:43:23.739745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-24 18:43:23.739857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-24 18:43:23.739968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-24 18:43:23.739969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
18:43:23 accel.accel_decomp_full_mcore [xtrace condensed: accel_perf config reads begin -- val=, val=, val=0xf]
00:06:39.043 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- #
val=decompress 00:06:39.043 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.043 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.043 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:39.044 18:43:23 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.044 18:43:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.981 00:06:39.981 real 0m1.432s 00:06:39.981 user 0m4.725s 00:06:39.981 sys 0m0.149s 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.981 18:43:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:39.981 ************************************ 00:06:39.982 END TEST accel_decomp_full_mcore 00:06:39.982 ************************************ 00:06:40.242 18:43:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.242 18:43:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:40.242 18:43:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.242 18:43:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.242 ************************************ 00:06:40.242 START TEST accel_decomp_mthread 00:06:40.242 ************************************ 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
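[Editorial note] Each accel test traced above reduces to one accel_perf run. A minimal sketch of the equivalent command, with the binary path, input file, and flags taken from this log; the empty JSON stub on fd 62 is an assumption, since the generated config itself is never printed here:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -t 1: run for one second; -w decompress: workload under test
    # -l: pre-compressed input blob shipped with the tests (test/accel/bib)
    # -y: verify the output; -T 2: two worker threads (the mthread variant)
    # -c /dev/fd/62: accel config JSON; an empty stub stands in here
    "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 \
      -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2 62<<< '{}'

The mcore variants differ only in the core mask (-m 0xf) and, for the "full" cases, the 111250-byte input size visible in the trace.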
00:06:40.242 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:40.242 [2024-07-24 18:43:25.072601] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:40.242 [2024-07-24 18:43:25.072680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321485 ] 00:06:40.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.242 [2024-07-24 18:43:25.153019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.242 [2024-07-24 18:43:25.239472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.538 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 
18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.539 18:43:25 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.539 18:43:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.477 00:06:41.477 real 0m1.398s 00:06:41.477 user 0m1.275s 00:06:41.477 sys 0m0.136s 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.477 18:43:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:41.477 ************************************ 00:06:41.477 END TEST accel_decomp_mthread 00:06:41.477 ************************************ 00:06:41.477 18:43:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.477 18:43:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 
']' 00:06:41.477 18:43:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.477 18:43:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.736 ************************************ 00:06:41.736 START TEST accel_decomp_full_mthread 00:06:41.736 ************************************ 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:41.736 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:41.736 [2024-07-24 18:43:26.536116] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
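[Editorial note] The build_accel_config fragments traced above (accel_json_cfg=(), the "-gt 0" checks, "local IFS=,", "jq -r .") hint at how the harness assembles the JSON handed to accel_perf on /dev/fd/62. A speculative reconstruction only; the real helper lives in test/accel/accel.sh and the key names below are assumed, not taken from this log:

    # Speculative reconstruction of the config plumbing seen in the trace.
    accel_json_cfg=()   # per-module JSON snippets; stays empty without DSA/IAA

    build_accel_config() {
      local IFS=,   # join snippets with commas, as 'local IFS=,' suggests
      # jq -r . validates and pretty-prints whatever was assembled
      jq -r . <<< "{\"modules\": [ ${accel_json_cfg[*]} ]}"
    }

    # accel_perf then reads it on fd 62, matching '-c /dev/fd/62' in the log:
    #   accel_perf -c /dev/fd/62 ... 62< <(build_accel_config)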
00:06:41.736 [2024-07-24 18:43:26.536168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321765 ] 00:06:41.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.736 [2024-07-24 18:43:26.617195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.736 [2024-07-24 18:43:26.703920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.995 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.995 18:43:26 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.996 18:43:26 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.996 18:43:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.192 00:06:43.192 real 0m1.435s 00:06:43.192 user 0m1.306s 00:06:43.192 sys 0m0.143s 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.192 18:43:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:43.192 ************************************ 00:06:43.192 END 
TEST accel_decomp_full_mthread 00:06:43.192 ************************************ 00:06:43.192 18:43:27 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:43.192 18:43:27 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:43.192 18:43:27 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:43.192 18:43:27 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.192 18:43:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.192 18:43:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.192 18:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.192 18:43:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.192 18:43:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.192 18:43:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.192 18:43:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.192 18:43:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:43.192 18:43:27 accel -- accel/accel.sh@41 -- # jq -r . 00:06:43.192 ************************************ 00:06:43.192 START TEST accel_dif_functional_tests 00:06:43.192 ************************************ 00:06:43.192 18:43:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:43.192 [2024-07-24 18:43:28.056236] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:43.192 [2024-07-24 18:43:28.056286] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322053 ] 00:06:43.192 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.192 [2024-07-24 18:43:28.137142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.451 [2024-07-24 18:43:28.226466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.451 [2024-07-24 18:43:28.226580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.451 [2024-07-24 18:43:28.226580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.451 00:06:43.451 00:06:43.451 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.451 http://cunit.sourceforge.net/ 00:06:43.451 00:06:43.451 00:06:43.451 Suite: accel_dif 00:06:43.451 Test: verify: DIF generated, GUARD check ...passed 00:06:43.451 Test: verify: DIF generated, APPTAG check ...passed 00:06:43.451 Test: verify: DIF generated, REFTAG check ...passed 00:06:43.451 Test: verify: DIF not generated, GUARD check ...[2024-07-24 18:43:28.301737] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:43.451 passed 00:06:43.451 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 18:43:28.301800] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:43.451 passed 00:06:43.451 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 18:43:28.301834] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:43.451 passed 00:06:43.451 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:43.451 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 18:43:28.301904] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:06:43.451 passed 00:06:43.451 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:43.451 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:43.451 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:43.451 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 18:43:28.302059] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:43.451 passed 00:06:43.452 Test: verify copy: DIF generated, GUARD check ...passed 00:06:43.452 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:43.452 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:43.452 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 18:43:28.302228] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:43.452 passed 00:06:43.452 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 18:43:28.302262] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:43.452 passed 00:06:43.452 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 18:43:28.302295] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:43.452 passed 00:06:43.452 Test: generate copy: DIF generated, GUARD check ...passed 00:06:43.452 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:43.452 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:43.452 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:43.452 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:43.452 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:43.452 Test: generate copy: iovecs-len validate ...[2024-07-24 18:43:28.302562] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
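[Editorial note] Each negative case above corrupts one field of the per-block protection trailer and expects the matching compare to fail with exactly these dif.c errors. For reference, the layout those Guard/App Tag/Ref Tag messages refer to is standard T10 PI, not something this log defines; the empty config stub below is likewise an assumption:

    # 8-byte T10 protection-information trailer appended to each block:
    #   bytes 0-1: Guard   - CRC-16 over the block data
    #   bytes 2-3: App Tag - application-defined 16-bit tag
    #   bytes 4-7: Ref Tag - 32-bit tag, typically seeded from the LBA
    #
    # The suite is driven by the dif binary with the same fd-62 config
    # convention as accel_perf:
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62<<< '{}'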
00:06:43.452 passed 00:06:43.452 Test: generate copy: buffer alignment validate ...passed 00:06:43.452 00:06:43.452 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.452 suites 1 1 n/a 0 0 00:06:43.452 tests 26 26 26 0 0 00:06:43.452 asserts 115 115 115 0 n/a 00:06:43.452 00:06:43.452 Elapsed time = 0.004 seconds 00:06:43.711 00:06:43.711 real 0m0.481s 00:06:43.711 user 0m0.682s 00:06:43.711 sys 0m0.180s 00:06:43.711 18:43:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.711 18:43:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 ************************************ 00:06:43.711 END TEST accel_dif_functional_tests 00:06:43.711 ************************************ 00:06:43.711 00:06:43.711 real 0m32.704s 00:06:43.711 user 0m36.176s 00:06:43.711 sys 0m4.895s 00:06:43.711 18:43:28 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.711 18:43:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 ************************************ 00:06:43.711 END TEST accel 00:06:43.711 ************************************ 00:06:43.711 18:43:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:43.711 18:43:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.711 18:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.711 18:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 ************************************ 00:06:43.711 START TEST accel_rpc 00:06:43.711 ************************************ 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:43.711 * Looking for test storage... 00:06:43.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:43.711 18:43:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.711 18:43:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2322360 00:06:43.711 18:43:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2322360 00:06:43.711 18:43:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2322360 ']' 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.711 18:43:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.971 [2024-07-24 18:43:28.751438] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
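[Editorial note] The accel_rpc flow that follows boils down to: start spdk_tgt paused with --wait-for-rpc, re-point the copy opcode at a module before subsystem init, then finish init and confirm the assignment. A condensed sketch using the rpc.py calls that appear verbatim in this trace; the sleep stands in for the harness's waitforlisten polling:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"

    # --wait-for-rpc holds the target before subsystem init, so the
    # opcode-to-module mapping can still be changed over RPC.
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 2   # stand-in for waitforlisten on the RPC socket

    "$rpc" accel_assign_opc -o copy -m software    # pin 'copy' to software
    "$rpc" framework_start_init                    # now finish initialization
    "$rpc" accel_get_opc_assignments | jq -r .copy # expect: software

    kill "$tgt_pid"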
00:06:43.971 [2024-07-24 18:43:28.751502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322360 ] 00:06:43.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.971 [2024-07-24 18:43:28.834178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.971 [2024-07-24 18:43:28.924885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.908 18:43:29 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.908 18:43:29 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:44.908 18:43:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:44.908 18:43:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:44.908 18:43:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:44.908 18:43:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:44.908 18:43:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:44.908 18:43:29 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.908 18:43:29 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.908 18:43:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.908 ************************************ 00:06:44.908 START TEST accel_assign_opcode 00:06:44.908 ************************************ 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.908 [2024-07-24 18:43:29.723310] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.908 [2024-07-24 18:43:29.731327] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.908 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.168 software 00:06:45.168 00:06:45.168 real 0m0.259s 00:06:45.168 user 0m0.050s 00:06:45.168 sys 0m0.010s 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.168 18:43:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.168 ************************************ 00:06:45.168 END TEST accel_assign_opcode 00:06:45.168 ************************************ 00:06:45.168 18:43:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2322360 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2322360 ']' 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2322360 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2322360 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2322360' 00:06:45.168 killing process with pid 2322360 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 2322360 00:06:45.168 18:43:30 accel_rpc -- common/autotest_common.sh@972 -- # wait 2322360 00:06:45.427 00:06:45.427 real 0m1.802s 00:06:45.427 user 0m1.952s 00:06:45.427 sys 0m0.506s 00:06:45.427 18:43:30 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.427 18:43:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.427 ************************************ 00:06:45.427 END TEST accel_rpc 00:06:45.427 ************************************ 00:06:45.427 18:43:30 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.427 18:43:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.427 18:43:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.427 18:43:30 -- common/autotest_common.sh@10 -- # set +x 00:06:45.687 ************************************ 00:06:45.687 START TEST app_cmdline 00:06:45.687 ************************************ 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.687 * Looking for test storage... 
00:06:45.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.687 18:43:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.687 18:43:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2322705 00:06:45.687 18:43:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2322705 00:06:45.687 18:43:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2322705 ']' 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.687 18:43:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.687 [2024-07-24 18:43:30.611208] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:06:45.687 [2024-07-24 18:43:30.611269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322705 ] 00:06:45.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.687 [2024-07-24 18:43:30.692598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.947 [2024-07-24 18:43:30.783581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.884 18:43:31 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.884 18:43:31 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:46.884 { 00:06:46.884 "version": "SPDK v24.09-pre git sha1 0bb5c21e2", 00:06:46.884 "fields": { 00:06:46.884 "major": 24, 00:06:46.884 "minor": 9, 00:06:46.884 "patch": 0, 00:06:46.884 "suffix": "-pre", 00:06:46.884 "commit": "0bb5c21e2" 00:06:46.884 } 00:06:46.884 } 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.884 18:43:31 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.884 18:43:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.884 18:43:31 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.884 18:43:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.885 18:43:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.885 18:43:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.885 18:43:31 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.144 request: 00:06:47.144 { 00:06:47.144 "method": "env_dpdk_get_mem_stats", 00:06:47.144 "req_id": 1 00:06:47.144 } 00:06:47.144 Got JSON-RPC error response 00:06:47.144 response: 00:06:47.144 { 00:06:47.144 "code": -32601, 00:06:47.144 "message": "Method not found" 00:06:47.144 } 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.144 18:43:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2322705 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2322705 ']' 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2322705 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2322705 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2322705' 00:06:47.144 killing process with pid 2322705 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@967 -- # kill 2322705 00:06:47.144 18:43:32 app_cmdline -- common/autotest_common.sh@972 -- # wait 2322705 00:06:47.713 00:06:47.713 real 0m2.001s 00:06:47.713 user 0m2.578s 00:06:47.713 sys 0m0.471s 00:06:47.713 18:43:32 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
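[editor's note] The app_cmdline run above is the RPC-allowlist check: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while any other method (env_dpdk_get_mem_stats in the trace) is refused with JSON-RPC error -32601, Method not found. A minimal by-hand reproduction, assuming a built tree in $SPDK ($SPDK is this note's shorthand, not a variable from the trace):

    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    $SPDK/scripts/rpc.py spdk_get_version          # allowed: prints the version JSON
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # denied: -32601 Method not found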
00:06:47.713 18:43:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.713 ************************************ 00:06:47.713 END TEST app_cmdline 00:06:47.713 ************************************ 00:06:47.713 18:43:32 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.713 18:43:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.713 18:43:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.713 18:43:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.713 ************************************ 00:06:47.713 START TEST version 00:06:47.713 ************************************ 00:06:47.713 18:43:32 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.713 * Looking for test storage... 00:06:47.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.713 18:43:32 version -- app/version.sh@17 -- # get_header_version major 00:06:47.713 18:43:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # cut -f2 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.714 18:43:32 version -- app/version.sh@17 -- # major=24 00:06:47.714 18:43:32 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.714 18:43:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # cut -f2 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.714 18:43:32 version -- app/version.sh@18 -- # minor=9 00:06:47.714 18:43:32 version -- app/version.sh@19 -- # get_header_version patch 00:06:47.714 18:43:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # cut -f2 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.714 18:43:32 version -- app/version.sh@19 -- # patch=0 00:06:47.714 18:43:32 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.714 18:43:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # cut -f2 00:06:47.714 18:43:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.714 18:43:32 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.714 18:43:32 version -- app/version.sh@22 -- # version=24.9 00:06:47.714 18:43:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.714 18:43:32 version -- app/version.sh@28 -- # version=24.9rc0 00:06:47.714 18:43:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.714 18:43:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.714 18:43:32 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:06:47.714 18:43:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:47.714 00:06:47.714 real 0m0.171s 00:06:47.714 user 0m0.084s 00:06:47.714 sys 0m0.126s 00:06:47.714 18:43:32 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.714 18:43:32 version -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 ************************************ 00:06:47.714 END TEST version 00:06:47.714 ************************************ 00:06:47.973 18:43:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@198 -- # uname -s 00:06:47.973 18:43:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:47.973 18:43:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:47.973 18:43:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:47.973 18:43:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:47.973 18:43:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.973 18:43:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.973 18:43:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:47.973 18:43:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:47.973 18:43:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.973 18:43:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.973 18:43:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.973 18:43:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.973 ************************************ 00:06:47.973 START TEST nvmf_tcp 00:06:47.973 ************************************ 00:06:47.974 18:43:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.974 * Looking for test storage... 00:06:47.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.974 18:43:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.974 18:43:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:47.974 18:43:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.974 18:43:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.974 18:43:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.974 18:43:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.974 ************************************ 00:06:47.974 START TEST nvmf_target_core 00:06:47.974 ************************************ 00:06:47.974 18:43:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:48.233 * Looking for test storage... 
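[editor's note] The version test that just finished never asks a running target for its version; it pattern-matches include/spdk/version.h and cross-checks python's spdk.__version__. The traced pipeline, condensed (paths shortened for readability):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
        | cut -f2 | tr -d '"'                 # -> 24; same for MINOR, PATCH, SUFFIX
    python3 -c 'import spdk; print(spdk.__version__)'   # -> 24.9rc0
    # patch == 0, so the expected string is 24.9 plus suffix rc0; both sides match.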
00:06:48.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.233 ************************************ 00:06:48.233 START TEST nvmf_abort 00:06:48.233 ************************************ 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:48.233 * Looking for test storage... 00:06:48.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.233 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:48.234 18:43:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:48.234 18:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.803 18:43:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:54.803 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:54.803 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.803 18:43:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:54.803 Found net devices under 0000:af:00.0: cvl_0_0 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:54.803 Found net devices under 0000:af:00.1: cvl_0_1 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.803 18:43:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.803 18:43:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.803 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:06:54.804 00:06:54.804 --- 10.0.0.2 ping statistics --- 00:06:54.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.804 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
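[editor's note] nvmf_tcp_init above turns the two E810 ports into a self-contained test fabric: cvl_0_0 moves into a fresh network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings, one in each direction, prove the path before any NVMe/TCP traffic flows. The traced sequence, condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT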
00:06:54.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:06:54.804 00:06:54.804 --- 10.0.0.1 ping statistics --- 00:06:54.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.804 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2326553 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2326553 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2326553 ']' 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.804 18:43:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.804 [2024-07-24 18:43:39.257921] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
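[editor's note] nvmfappstart launches the target inside that namespace. The mask -m 0xE is binary 1110, i.e. reactors on cores 1-3 with core 0 left free, which is what the "Total cores available: 3" and per-core reactor notices below report. The traced launch, condensed:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE    # 0xE = cores 1,2,3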
00:06:54.804 [2024-07-24 18:43:39.257975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.804 [2024-07-24 18:43:39.344860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.804 [2024-07-24 18:43:39.452120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.804 [2024-07-24 18:43:39.452166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.804 [2024-07-24 18:43:39.452179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.804 [2024-07-24 18:43:39.452190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.804 [2024-07-24 18:43:39.452200] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.804 [2024-07-24 18:43:39.452319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.804 [2024-07-24 18:43:39.452431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.804 [2024-07-24 18:43:39.452434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 [2024-07-24 18:43:40.260156] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 Malloc0 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.372 Delay0 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 [2024-07-24 18:43:40.356284] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.372 18:43:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:55.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.632 [2024-07-24 18:43:40.444776] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:58.181 Initializing NVMe Controllers 00:06:58.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:58.181 controller IO queue size 128 less than required 00:06:58.181 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:58.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:58.181 Initialization complete. Launching workers. 
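[editor's note] Everything the abort example talks to was provisioned over JSON-RPC in the trace above (rpc_cmd wraps scripts/rpc.py): a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev, and a subsystem exporting it on 10.0.0.2:4420. Condensed below; the flag readings in the comments are this editor's gloss, not trace output:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # -a widens the admin queue for the abort flood
    rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB backing store, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # roughly 1 s added per op
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With Delay0 holding every I/O for about a second, the example's 128-deep submit queue keeps the controller's 128-entry IO queue saturated (hence the "queue size 128 less than required" notice above), so the abort path gets real work: per the statistics below, 29397 aborts were submitted and 29340 of them succeeded.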
00:06:58.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29336 00:06:58.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29397, failed to submit 62 00:06:58.181 success 29340, unsuccess 57, failed 0 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.181 rmmod nvme_tcp 00:06:58.181 rmmod nvme_fabrics 00:06:58.181 rmmod nvme_keyring 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2326553 ']' 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2326553 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2326553 ']' 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2326553 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2326553 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2326553' 00:06:58.181 killing process with pid 2326553 00:06:58.181 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2326553 00:06:58.182 18:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2326553 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.182 18:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.086 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:00.345 00:07:00.345 real 0m11.988s 00:07:00.345 user 0m14.147s 00:07:00.345 sys 0m5.437s 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:00.346 ************************************ 00:07:00.346 END TEST nvmf_abort 00:07:00.346 ************************************ 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.346 ************************************ 00:07:00.346 START TEST nvmf_ns_hotplug_stress 00:07:00.346 ************************************ 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:00.346 * Looking for test storage... 
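[editor's note] nvmftestfini above is the mirror image of the setup: unload the host-side NVMe modules, kill the target, and flush the initiator port so the next suite starts clean. Condensed from the trace:

    modprobe -v -r nvme-tcp        # the rmmod lines show nvme_fabrics / nvme_keyring going too
    kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush cvl_0_1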
00:07:00.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:00.346 18:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
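The gather_supported_nvmf_pci_devs pass that runs next walks the PCI bus for known NIC IDs (Intel 0x8086 with E810/X722 device IDs, Mellanox 0x15b3) and collects the matches into the e810/x722/mlx arrays declared above. Reduced to its essentials, and assuming only the standard sysfs layout rather than the script's pci_bus_cache helper, the scan is roughly:

    # Hedged sketch: find E810 ports (vendor 0x8086, device 0x1592/0x159b) via
    # sysfs -- the same IDs the trace below reports for 0000:af:00.0/00.1.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2>/dev/null   # netdev names bound to this port
        fi
    done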
00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:06.921 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.921 18:43:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:06.921 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:06.921 Found net devices under 0000:af:00.0: cvl_0_0 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.921 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:06.922 Found net devices under 0000:af:00.1: cvl_0_1 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.922 18:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:06.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:06.922 00:07:06.922 --- 10.0.0.2 ping statistics --- 00:07:06.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.922 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:06.922 00:07:06.922 --- 10.0.0.1 ping statistics --- 00:07:06.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.922 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2330835 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2330835 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2330835 ']' 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
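Condensing the nvmf_tcp_init and nvmfappstart steps traced above: one E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target side, the peer port (cvl_0_1) stays in the default namespace as the initiator, the path is sanity-checked with pings, and the target application is launched inside the namespace. A rough sketch of that sequence follows; SPDK_DIR is a placeholder, and using framework_wait_init as the readiness probe is an assumption standing in for the test's waitforlisten helper:

    # Hedged recap of the bring-up above, not the verbatim test helpers.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # Start the target in the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" framework_wait_init &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1                    # give up if the target died
        sleep 0.5
    done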
00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.922 18:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:06.922 [2024-07-24 18:43:51.348278] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:07:06.922 [2024-07-24 18:43:51.348339] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.922 [2024-07-24 18:43:51.435111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.922 [2024-07-24 18:43:51.540612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.922 [2024-07-24 18:43:51.540659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.922 [2024-07-24 18:43:51.540672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.922 [2024-07-24 18:43:51.540683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.922 [2024-07-24 18:43:51.540692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.922 [2024-07-24 18:43:51.540812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.922 [2024-07-24 18:43:51.540923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.922 [2024-07-24 18:43:51.540924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:07.491 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:07.491 [2024-07-24 18:43:52.479259] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.750 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:07.750 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.010 
[2024-07-24 18:43:52.838713] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.010 18:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.269 18:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:08.269 Malloc0 00:07:08.269 18:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:08.526 Delay0 00:07:08.526 18:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.784 18:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:09.042 NULL1 00:07:09.042 18:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:09.302 18:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:09.302 18:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2331392 00:07:09.302 18:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:09.302 18:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.681 Read completed with error (sct=0, sc=11) 00:07:10.681 18:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.681 18:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:10.681 18:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:10.940 true 00:07:10.940 18:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:10.940 18:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.879 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.879 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:11.879 18:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:12.137 true 00:07:12.137 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:12.137 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.396 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.655 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:12.655 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:12.655 true 00:07:12.655 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:12.655 18:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.034 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.034 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:14.034 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:14.034 true 00:07:14.034 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:14.034 18:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.293 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.554 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:14.554 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:14.847 true 00:07:14.847 18:43:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:14.847 18:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.793 18:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.052 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.052 18:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:16.052 18:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:16.311 true 00:07:16.311 18:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:16.311 18:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.248 18:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.248 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:17.248 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:17.506 true 00:07:17.506 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:17.506 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.765 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.023 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:18.023 18:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:18.282 true 00:07:18.282 18:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:18.282 18:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
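For context on what this loop is hammering: before it started, the trace showed subsystem cnode1 being assembled over the TCP transport, backed by a delay-wrapped malloc bdev (Delay0) and a 1000 MiB null bdev (NULL1), with spdk_nvme_perf started against it as the in-flight workload. Collapsed into one place (rpc_py abbreviates the full scripts/rpc.py path the script sets, SPDK_DIR is a placeholder):

    # Condensed sketch of the bring-up traced earlier, paths shortened.
    rpc_py="$SPDK_DIR/scripts/rpc.py"
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0        # 32 MiB, 512 B blocks
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512             # 1000 MiB null bdev
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Reader workload that keeps I/O in flight while namespaces churn:
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!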
00:07:19.224 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.224 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:19.224 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:19.483 true 00:07:19.483 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:19.483 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.741 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.741 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:19.741 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:20.000 true 00:07:20.000 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:20.000 18:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.378 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.378 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:21.378 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:21.638 true 00:07:21.638 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:21.638 18:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.576 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.576 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:22.576 18:44:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:22.835 true 00:07:22.835 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:22.835 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.835 18:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.094 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:23.094 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:23.353 true 00:07:23.353 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:23.353 18:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.731 18:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.731 18:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:24.731 18:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:24.990 true 00:07:24.990 18:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:24.990 18:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.249 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.507 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:25.507 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:25.767 true 00:07:25.767 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:25.767 18:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.704 18:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.962 18:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:26.962 18:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:26.962 true 00:07:26.962 18:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:26.962 18:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.223 18:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.481 18:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:27.481 18:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:27.481 true 00:07:27.739 18:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:27.739 18:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.715 18:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.972 18:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:28.972 18:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:29.230 true 00:07:29.230 18:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:29.230 18:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.164 18:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.164 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:30.164 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:07:30.447 true 00:07:30.447 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:30.447 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.705 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.963 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:30.963 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:30.963 true 00:07:30.963 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:30.963 18:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.901 18:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.159 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:32.159 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:32.417 true 00:07:32.417 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:32.417 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.417 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.675 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:32.675 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:32.933 true 00:07:32.933 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:32.933 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.933 18:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.191 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 
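The cycle repeating above is the core of the stress test: while kill -0 confirms the perf process is still reading, the script rips namespace 1 out, re-adds Delay0, and grows NULL1 by one more megabyte. Per the script line numbers visible in the trace (@44-@50 of ns_hotplug_stress.sh), each pass amounts to:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do           # loop while perf still runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                    # 1001, 1002, ... per pass
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
    wait "$PERF_PID"                                    # reap perf once kill -0 fails

The recurring "Read completed with error (sct=0, sc=11)" bursts are the expected fallout of reads racing a just-removed namespace; perf's -Q 1000 option appears to suppress all but every thousandth such error, hence the "Message suppressed 999 times" lines.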
00:07:33.191 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:33.448 true 00:07:33.448 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:33.448 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.706 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.965 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:33.965 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:33.965 true 00:07:33.965 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:33.965 18:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.223 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.223 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:34.223 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:34.481 true 00:07:34.481 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:34.481 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.739 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.998 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:34.998 18:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:34.998 true 00:07:35.257 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:35.257 18:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.192 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.450 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:36.450 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:36.708 true 00:07:36.708 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:36.708 18:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.644 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.644 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:37.644 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:37.903 true 00:07:37.903 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:37.903 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.162 18:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.162 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:38.162 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:38.420 true 00:07:38.420 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:38.420 18:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.796 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.796 Initializing NVMe Controllers 00:07:39.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.796 Controller IO queue size 128, less than required. 00:07:39.796 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:39.796 Controller IO queue size 128, less than required. 00:07:39.796 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:39.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:39.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:39.796 Initialization complete. Launching workers. 00:07:39.796 ======================================================== 00:07:39.796 Latency(us) 00:07:39.796 Device Information : IOPS MiB/s Average min max 00:07:39.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 800.87 0.39 97742.72 4679.45 1024309.02 00:07:39.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5043.30 2.46 25380.69 9826.82 575084.04 00:07:39.796 ======================================================== 00:07:39.796 Total : 5844.17 2.85 35296.96 4679.45 1024309.02 00:07:39.796 00:07:39.797 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:39.797 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:39.797 true 00:07:40.054 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2331392 00:07:40.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2331392) - No such process 00:07:40.054 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2331392 00:07:40.054 18:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.347 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.347 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:40.347 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:40.347 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:40.347 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:40.347 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:40.607 null0 00:07:40.607 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:40.607 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:40.607 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:40.866 null1 00:07:40.866 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:40.866 18:44:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:40.866 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:41.125 null2 00:07:41.125 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:41.125 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:41.125 18:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:41.125 null3 00:07:41.384 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:41.384 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:41.384 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:41.384 null4 00:07:41.384 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:41.384 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:41.384 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:41.642 null5 00:07:41.642 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:41.642 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:41.642 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:41.642 null6 00:07:41.642 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:41.642 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:41.642 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:41.902 null7 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
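After the single-namespace phase, the script switches to parallel churn: eight 100 MiB null bdevs (null0 through null7) are created, and an add_remove worker is spawned once per bdev. From the fragments visible in the trace (@14-@17), each worker pins a fixed NSID to its bdev and cycles it ten times; the removal half of the cycle is cut off in this excerpt, so its exact form below is an assumption:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # Assumed counterpart; the removal call is not shown in this excerpt.
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }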
00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
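The @14-18 entries above also trace the body of add_remove itself: each worker pins one namespace ID and one bdev, then runs ten cycles of attaching that bdev as a namespace of nqn.2016-06.io.spdk:cnode1 and detaching it again. Reconstructed from the trace, again with $rpc_py as an assumed shorthand for the full scripts/rpc.py path:

    add_remove() {
        local nsid=$1 bdev=$2
        # Ten hot-plug cycles: add the bdev as namespace $nsid, then remove it.
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }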
00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
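From this point until the workers' loops run out, the log is the eight workers' xtrace output interleaved on a single stream: entries for different namespace IDs alternate nondeterministically, but each worker only ever adds and removes its own NSID, so the stress lies in the target's concurrent attach/detach paths rather than in contention for a namespace ID. To spot-check the subsystem's namespace list while (or after) the stress runs, SPDK's nvmf_get_subsystems RPC can be used; this invocation is illustrative and not part of the test script:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems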
00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.902 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2337318 2337320 2337323 2337326 2337329 2337332 2337335 2337338 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.903 18:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.162 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.421 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.422 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.681 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.940 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:43.199 18:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.199 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.458 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.459 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:43.717 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.975 18:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
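A reading aid for this stretch of churn: the (( i < 10 )) / (( ++i )) pairs are per-worker loop counters, so filtering on one namespace ID isolates a single worker's ten cycles. For example, to follow worker 3 through a saved copy of this console output (the console.log file name is hypothetical):

    grep -nE '(add_ns -n|remove_ns nqn.2016-06.io.spdk:cnode1) 3( |$)' console.log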
00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.233 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.491 
18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.491 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:44.748 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.006 18:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:45.264 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.521 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:45.779 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:46.038 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.039 18:44:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:46.298 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.557 
18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.557 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.817 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.075 18:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:47.334 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.334 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.334 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:47.334 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.334 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.593 rmmod nvme_tcp 00:07:47.593 rmmod nvme_fabrics 00:07:47.593 rmmod nvme_keyring 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2330835 ']' 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2330835 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2330835 ']' 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2330835 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330835 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330835' 00:07:47.593 killing process with pid 2330835 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2330835 00:07:47.593 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2330835 
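The @16-@18 trace above is the core of the hotplug stress: a bounded loop that attaches and detaches namespaces on nqn.2016-06.io.spdk:cnode1 while I/O runs against the target. A minimal sketch of that pattern, assuming a running nvmf_tgt with null bdevs null0..null9 already created; the random add/remove pairing here is illustrative, not the script's exact ordering:

```bash
# Sketch of the add/remove namespace loop traced above (ns_hotplug_stress.sh @16-@18).
# Assumes subsystem nqn.2016-06.io.spdk:cnode1 exists with null bdevs null0..null9.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do
    n=$((RANDOM % 10 + 1))                          # NSIDs are 1-based
    $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    $rpc_py nvmf_subsystem_remove_ns "$nqn" $((RANDOM % 10 + 1))
done
```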
00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.852 18:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.391 00:07:50.391 real 0m49.656s 00:07:50.391 user 3m26.614s 00:07:50.391 sys 0m16.278s 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:50.391 ************************************ 00:07:50.391 END TEST nvmf_ns_hotplug_stress 00:07:50.391 ************************************ 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.391 ************************************ 00:07:50.391 START TEST nvmf_delete_subsystem 00:07:50.391 ************************************ 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:50.391 * Looking for test storage... 
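The nvmftestfini sequence above unloads the kernel initiator modules (nvme_tcp, nvme_fabrics, nvme_keyring), kills the target app, removes the SPDK network namespace, and flushes the test interface. A hedged sketch of the same teardown, with $nvmfpid standing in for the target PID (2330835 in this run) and the retry budget an assumption taken from the traced loop:

```bash
# Teardown sketch matching the trace: retry module unload, stop nvmf_tgt,
# drop the target namespace, flush the initiator-side interface.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
kill "$nvmfpid" 2> /dev/null
while kill -0 "$nvmfpid" 2> /dev/null; do sleep 0.1; done   # wait for exit
ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true        # _remove_spdk_ns equivalent (assumption)
ip -4 addr flush cvl_0_1
```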
00:07:50.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.391 18:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.391 18:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:55.729 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
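The NIC scan that follows resolves supported devices from a cached PCI map keyed by vendor:device IDs (e810 = 0x1592/0x159b, x722 = 0x37d2, plus the Mellanox list). A standalone approximation, assuming lspci is available; 8086:159b matches the two E810 ports the scan reports below:

```bash
# Approximate equivalent of the E810 discovery below: list PCI functions
# with Intel device ID 0x159b and print their kernel net interface names.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2> /dev/null)"
done
```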
00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:55.730 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:55.730 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:55.730 Found net devices under 0000:af:00.0: cvl_0_0 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:55.730 Found net devices under 0000:af:00.1: cvl_0_1 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:55.730 18:44:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.730 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:55.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:55.989 00:07:55.989 --- 10.0.0.2 ping statistics --- 00:07:55.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.989 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:07:55.989 00:07:55.989 --- 10.0.0.1 ping statistics --- 00:07:55.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.989 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:55.989 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2342197 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2342197 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2342197 ']' 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.990 18:44:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.249 [2024-07-24 18:44:41.014992] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
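The two successful pings above validate the split-namespace topology used for every TCP test in this run: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, while the initiator port (cvl_0_1) stays in the root namespace. Collected in one place, as the commands appear in the trace:

```bash
# Namespace plumbing traced above (nvmf/common.sh@248-@268).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
```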
00:07:56.249 [2024-07-24 18:44:41.015046] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.249 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.249 [2024-07-24 18:44:41.103271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:56.249 [2024-07-24 18:44:41.190239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.249 [2024-07-24 18:44:41.190283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.249 [2024-07-24 18:44:41.190293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.249 [2024-07-24 18:44:41.190302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.249 [2024-07-24 18:44:41.190310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.249 [2024-07-24 18:44:41.190365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.249 [2024-07-24 18:44:41.190368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.186 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.186 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:07:57.186 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.186 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.186 18:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 [2024-07-24 18:44:42.012679] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 [2024-07-24 18:44:42.033221] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 NULL1 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 Delay0 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2342477 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:57.186 18:44:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:57.186 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.186 [2024-07-24 18:44:42.134452] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
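Setup for this test, gathered from the trace above: a null bdev wrapped in a delay bdev with 1,000,000 µs latencies guarantees that perf still has I/O queued when the subsystem is deleted, which is what produces the aborted completions (sct=0, sc=8) below. Commands as traced (delete_subsystem.sh @15-@32); only the backgrounding of perf is paraphrased:

```bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
nqn=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_null_create NULL1 1000 512              # 1000 MiB backing, 512 B blocks
$rpc_py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s per I/O
$rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # I/O still in flight...
sleep 2
$rpc_py nvmf_delete_subsystem "$nqn"                 # ...when the subsystem goes away
```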
00:07:59.089 18:44:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.089 18:44:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.089 18:44:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.347 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 [2024-07-24 18:44:44.322911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ad4000c00 is same with the state(6) to be set 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed 
with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed 
with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 starting I/O failed: -6 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Write completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.348 Read completed with error (sct=0, sc=8) 00:07:59.349 starting I/O failed: -6 00:07:59.349 Write completed with error (sct=0, sc=8) 00:07:59.349 Read completed with error (sct=0, sc=8) 00:07:59.349 
Read completed with error (sct=0, sc=8) 00:07:59.349 Read completed with error (sct=0, sc=8) 00:07:59.349 starting I/O failed: -6 00:07:59.349 Write completed with error (sct=0, sc=8) 00:07:59.349 Write completed with error (sct=0, sc=8) 00:07:59.349 [2024-07-24 18:44:44.324782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f0e90 is same with the state(6) to be set 00:08:00.283 [2024-07-24 18:44:45.273887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cf500 is same with the state(6) to be set 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 [2024-07-24 18:44:45.321188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ad400d330 is same with the state(6) to be set 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Write completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.542 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 [2024-07-24 18:44:45.325753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f3650 is same with the state(6) to be set 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed 
with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 [2024-07-24 18:44:45.326134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efd00 is same with the state(6) to be set 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Write completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 Read completed with error (sct=0, sc=8) 00:08:00.543 [2024-07-24 18:44:45.326471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f0cb0 is same with the state(6) to be set 00:08:00.543 Initializing NVMe Controllers 00:08:00.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:00.543 Controller IO queue size 128, less than required. 00:08:00.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:00.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:00.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:00.543 Initialization complete. Launching workers. 
00:08:00.543 ========================================================
00:08:00.543 Latency(us)
00:08:00.543 Device Information : IOPS MiB/s Average min max
00:08:00.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.42 0.09 966075.84 1371.37 1019819.72
00:08:00.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.69 0.07 892185.63 812.99 1018875.63
00:08:00.543 ========================================================
00:08:00.543 Total : 326.12 0.16 931705.69 812.99 1019819.72
00:08:00.543
00:08:00.543 [2024-07-24 18:44:45.327500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cf500 (9): Bad file descriptor
00:08:00.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:00.543 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:00.543 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:00.543 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2342477
00:08:00.543 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2342477
00:08:01.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2342477) - No such process
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2342477
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2342477
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2342477
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:01.111 18:44:45
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 [2024-07-24 18:44:45.852713] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2343170 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:01.111 18:44:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:01.111 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.111 [2024-07-24 18:44:45.929087] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
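The xtrace above is delete_subsystem.sh kicking off spdk_nvme_perf in the background and then watching the PID rather than the socket: kill -0 probes whether the process still exists without delivering any signal, and a bounded counter turns a hung perf run into a test failure instead of a stuck CI job. A minimal sketch of the pattern, reconstructed from the trace (the binary path is shortened and the timeout handling is an assumption, not the verbatim script):

    # Reconstructed from the delete_subsystem.sh xtrace above; not verbatim.
    "$SPDK_BIN/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                                 # 2343170 in this run
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # probe only; no signal is sent
        (( delay++ > 20 )) && exit 1            # assumed: give up after ~10s of 0.5s probes
        sleep 0.5
    done

Once the probe fails ("kill: (2343170) - No such process" further down), the harness reaps the exit status with wait and tears the target back down.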
00:08:01.370 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:01.370 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:01.370 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:01.938 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:01.938 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:01.938 18:44:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:02.506 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:02.506 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:02.506 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.073 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.073 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:03.073 18:44:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.639 18:44:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.639 18:44:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:03.639 18:44:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.896 18:44:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.896 18:44:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170 00:08:03.896 18:44:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.155 Initializing NVMe Controllers 00:08:04.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:04.155 Controller IO queue size 128, less than required. 00:08:04.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:04.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:04.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:04.155 Initialization complete. Launching workers. 
00:08:04.155 ========================================================
00:08:04.155 Latency(us)
00:08:04.155 Device Information : IOPS MiB/s Average min max
00:08:04.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005133.62 1000272.48 1041093.09
00:08:04.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006987.43 1000269.85 1020334.59
00:08:04.155 ========================================================
00:08:04.155 Total : 256.00 0.12 1006060.53 1000269.85 1041093.09
00:08:04.155
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2343170
00:08:04.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2343170) - No such process
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2343170
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:04.414 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:04.414 rmmod nvme_tcp
00:08:04.673 rmmod nvme_fabrics
rmmod nvme_keyring
18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2342197 ']'
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2342197
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2342197 ']'
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2342197
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2342197
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '['
reactor_0 = sudo ']' 00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2342197' 00:08:04.673 killing process with pid 2342197 00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2342197 00:08:04.673 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2342197 00:08:04.931 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:04.931 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:04.931 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:04.932 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.932 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:04.932 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.932 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.932 18:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.833 00:08:06.833 real 0m16.886s 00:08:06.833 user 0m31.139s 00:08:06.833 sys 0m5.372s 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.833 ************************************ 00:08:06.833 END TEST nvmf_delete_subsystem 00:08:06.833 ************************************ 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.833 18:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.092 ************************************ 00:08:07.092 START TEST nvmf_host_management 00:08:07.092 ************************************ 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:07.092 * Looking for test storage... 
00:08:07.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.092 18:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.664 
18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.664 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:13.665 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:13.665 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:13.665 Found net devices under 0000:af:00.0: cvl_0_0 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:13.665 Found net devices under 0000:af:00.1: cvl_0_1 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:13.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:08:13.665
00:08:13.665 --- 10.0.0.2 ping statistics ---
00:08:13.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:13.665 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:13.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:13.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms
00:08:13.665
00:08:13.665 --- 10.0.0.1 ping statistics ---
00:08:13.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:13.665 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:13.665 18:44:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:13.665 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:08:13.665 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:08:13.665 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:08:13.665 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:13.665 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:13.665 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2347521
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2347521
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2347521 ']'
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on
UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.666 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.666 [2024-07-24 18:44:58.069722] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:08:13.666 [2024-07-24 18:44:58.069787] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.666 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.666 [2024-07-24 18:44:58.157458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.666 [2024-07-24 18:44:58.265672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.666 [2024-07-24 18:44:58.265717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.666 [2024-07-24 18:44:58.265730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.666 [2024-07-24 18:44:58.265741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.666 [2024-07-24 18:44:58.265752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.666 [2024-07-24 18:44:58.265876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.666 [2024-07-24 18:44:58.265908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.666 [2024-07-24 18:44:58.266019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.666 [2024-07-24 18:44:58.266021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 [2024-07-24 18:44:58.985102] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.233 18:44:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 Malloc0 00:08:14.233 [2024-07-24 18:44:59.062096] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2347682 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2347682 /var/tmp/bdevperf.sock 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2347682 ']' 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
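The bdevperf launch traced above takes its entire bdev configuration as JSON on --json /dev/fd/63: the harness expands a per-controller template via gen_nvmf_target_json and hands the result to bdevperf through bash process substitution, so no temporary config file ever touches disk. A minimal sketch of the idiom (paths shortened; the JSON that gen_nvmf_target_json actually emits is echoed in the trace just below):

    # Sketch of the launch pattern, not the verbatim host_management.sh.
    # <(...) materializes as /dev/fd/63, which bdevperf reads like a file.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!   # 2347682 in this run; polled later over the -r RPC socket

Note in the template below how "hdgst": ${hdgst:-false} and "ddgst": ${ddgst:-false} leave the TCP digests off unless the environment overrides them.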
00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.233 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.233 { 00:08:14.233 "params": { 00:08:14.233 "name": "Nvme$subsystem", 00:08:14.233 "trtype": "$TEST_TRANSPORT", 00:08:14.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.233 "adrfam": "ipv4", 00:08:14.233 "trsvcid": "$NVMF_PORT", 00:08:14.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.233 "hdgst": ${hdgst:-false}, 00:08:14.233 "ddgst": ${ddgst:-false} 00:08:14.234 }, 00:08:14.234 "method": "bdev_nvme_attach_controller" 00:08:14.234 } 00:08:14.234 EOF 00:08:14.234 )") 00:08:14.234 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:14.234 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:14.234 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:14.234 18:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.234 "params": { 00:08:14.234 "name": "Nvme0", 00:08:14.234 "trtype": "tcp", 00:08:14.234 "traddr": "10.0.0.2", 00:08:14.234 "adrfam": "ipv4", 00:08:14.234 "trsvcid": "4420", 00:08:14.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:14.234 "hdgst": false, 00:08:14.234 "ddgst": false 00:08:14.234 }, 00:08:14.234 "method": "bdev_nvme_attach_controller" 00:08:14.234 }' 00:08:14.234 [2024-07-24 18:44:59.166146] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:08:14.234 [2024-07-24 18:44:59.166217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347682 ] 00:08:14.234 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.492 [2024-07-24 18:44:59.249905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.492 [2024-07-24 18:44:59.338302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.751 Running I/O for 10 seconds... 
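Before the failure is injected, the harness makes sure bdevperf is actually moving data: the waitforio helper traced below polls bdev_get_iostat on the bdevperf RPC socket and pulls num_read_ops out with jq until it crosses a threshold. A sketch reconstructed from that xtrace (the inter-poll delay and the caller's argument handling are assumptions):

    # Reconstructed from the waitforio xtrace below; not the verbatim helper.
    waitforio() {
        local sock=$1 bdev=$2
        local ret=1 i count
        for (( i = 10; i != 0; i-- )); do            # bounded number of probes
            count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')  # 495 on the first probe here
            if [ "$count" -ge 100 ]; then            # enough reads have completed
                ret=0
                break
            fi
            sleep 0.25                               # assumed; the trace does not show the delay
        done
        return $ret
    }

In this run the very first probe already reports 495 read ops, so the loop breaks immediately and the test yanks the host from the subsystem with nvmf_subsystem_remove_host while I/O is still in flight, which is what produces the SQ DELETION aborts further down.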
00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=495 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 495 -ge 100 ']' 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.321 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.321 [2024-07-24 
18:45:00.209306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:08:15.321 [2024-07-24 18:45:00.209351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.321 [2024-07-24 18:45:00.209364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:08:15.321 [2024-07-24 18:45:00.209374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.321 [2024-07-24 18:45:00.209385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:08:15.321 [2024-07-24 18:45:00.209395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.321 [2024-07-24 18:45:00.209406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:08:15.321 [2024-07-24 18:45:00.209416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.321 [2024-07-24 18:45:00.209426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904e90 is same with the state(6) to be set
00:08:15.321 [2024-07-24 18:45:00.209830] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb19c70 is same with the state(6) to be set
[... the identical recv-state error for tqpair=0xb19c70, repeated from 18:45:00.209914 through 18:45:00.211103, elided ...]
00:08:15.322 [2024-07-24 18:45:00.211251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.322 [2024-07-24 18:45:00.211301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.322 [2024-07-24 18:45:00.211326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.322 [2024-07-24 18:45:00.211350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.322 [2024-07-24 18:45:00.211372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.322 [2024-07-24 18:45:00.211395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:15.322 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:15.322 [2024-07-24 18:45:00.211416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:15.322 [2024-07-24 18:45:00.211432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.322 [2024-07-24 18:45:00.211731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.322 [2024-07-24 18:45:00.211744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:15.323 [2024-07-24 18:45:00.211820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.211978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.211991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:15.323 [2024-07-24 18:45:00.212115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.323 [2024-07-24 18:45:00.212137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.323 [2024-07-24 18:45:00.212399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.323 [2024-07-24 18:45:00.212589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.323 [2024-07-24 18:45:00.212606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.324 [2024-07-24 18:45:00.212617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.324 [2024-07-24 18:45:00.212629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.324 [2024-07-24 18:45:00.212639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.324 [2024-07-24 18:45:00.212651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.324 [2024-07-24 18:45:00.212668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.324 [2024-07-24 18:45:00.212680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.324 [2024-07-24 18:45:00.212690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.324 [2024-07-24 18:45:00.212702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.324 [2024-07-24 18:45:00.212712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.324 [2024-07-24 18:45:00.212725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.324 [2024-07-24 18:45:00.212735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:15.324 [2024-07-24 18:45:00.212746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd15f80 is same with the state(6) to be set 00:08:15.324 [2024-07-24 18:45:00.212805] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd15f80 was disconnected and freed. reset controller. 
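The burst above is internally consistent with the bdevperf job that produced it: 64 READs (cid 0 through 63) were in flight when the submission queue was deleted, matching the job's queue depth of 64, and consecutive LBAs step by 128 blocks, which at a 512-byte block size (an assumption based on this suite's malloc bdev defaults) is exactly the 65536-byte I/O size. A quick arithmetic check, as a sketch:

# LBA span 73600-65536 over 63 gaps -> stride in blocks; stride * 512 B -> I/O size
awk 'BEGIN { print (73600 - 65536) / 63, 128 * 512 }'   # prints: 128 65536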
00:08:15.324 [2024-07-24 18:45:00.214183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:15.324 task offset: 65536 on job bdev=Nvme0n1 fails
00:08:15.324
00:08:15.324 Latency(us)
00:08:15.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:15.324 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:15.324 Job: Nvme0n1 ended in about 0.54 seconds with error
00:08:15.324 Verification LBA range: start 0x0 length 0x400
00:08:15.324 Nvme0n1 : 0.54 941.29 58.83 117.66 0.00 58774.55 9175.04 53382.05
00:08:15.324 ===================================================================================================================
00:08:15.324 Total : 941.29 58.83 117.66 0.00 58774.55 9175.04 53382.05
00:08:15.324 [2024-07-24 18:45:00.216508] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:15.324 [2024-07-24 18:45:00.216528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x904e90 (9): Bad file descriptor
18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
18:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-24 18:45:00.360892] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2347682
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2347682) - No such process
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
{
"params": {
"name": "Nvme$subsystem",
"trtype": "$TEST_TRANSPORT",
"traddr": "$NVMF_FIRST_TARGET_IP",
"adrfam": "ipv4",
"trsvcid": "$NVMF_PORT",
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
"hdgst": ${hdgst:-false},
"ddgst": ${ddgst:-false}
},
"method": "bdev_nvme_attach_controller"
}
EOF
)")
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
18:45:01
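For reference, the gen_nvmf_target_json trace above reduces to the following standalone sketch: one bdev_nvme_attach_controller entry per subsystem, built from a heredoc, joined with commas, and handed to bdevperf through a process substitution (which is where the /dev/fd/62 in the traced command line comes from). The outer subsystems/bdev envelope is inferred from bdevperf's --json config layout rather than shown in the trace, so treat it as an assumption:

#!/usr/bin/env bash
# Sketch only; the real helper lives in the suite's nvmf/common.sh.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420   # values this suite exports

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # Envelope assumed from bdevperf's config layout: subsystems -> bdev -> config[].
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# Equivalent of the traced invocation; <(...) is what shows up as /dev/fd/62:
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1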
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
18:45:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
"params": {
"name": "Nvme0",
"trtype": "tcp",
"traddr": "10.0.0.2",
"adrfam": "ipv4",
"trsvcid": "4420",
"subnqn": "nqn.2016-06.io.spdk:cnode0",
"hostnqn": "nqn.2016-06.io.spdk:host0",
"hdgst": false,
"ddgst": false
},
"method": "bdev_nvme_attach_controller"
}'
[2024-07-24 18:45:01.273440] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
[2024-07-24 18:45:01.273487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348178 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 18:45:01.344178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 18:45:01.428529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Verification LBA range: start 0x0 length 0x400
Nvme0n1 : 1.01 1073.60 67.10 0.00 0.00 58510.95 9294.20 53143.74
===================================================================================================================
Total : 1073.60 67.10 0.00 0.00 58510.95 9294.20 53143.74
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
18:45:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:45:03
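The two result tables above cross-check: at 64 KiB per I/O, MiB/s is IOPS divided by 16, and the failed run's 117.66 Fail/s over its 0.54 s runtime is roughly 64 failures (the 64 reset-aborted READs), while the clean rerun reports 0.00. Verified with shell arithmetic, values copied from the tables:

# MiB/s = IOPS * 65536 B / 2^20 B = IOPS / 16; failures = Fail/s * runtime
awk 'BEGIN { printf "%.2f %.2f %.0f\n", 941.29 / 16, 1073.60 / 16, 117.66 * 0.54 }'   # prints: 58.83 67.10 64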
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2347521 ']' 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2347521 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2347521 ']' 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2347521 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2347521 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2347521' 00:08:18.154 killing process with pid 2347521 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2347521 00:08:18.154 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2347521 00:08:18.413 [2024-07-24 18:45:03.340694] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:18.413 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.413 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.413 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.414 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.414 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.414 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.414 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.414 18:45:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:20.945 00:08:20.945 real 0m13.594s 00:08:20.945 user 0m25.009s 00:08:20.945 sys 0m5.724s 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.945 ************************************ 00:08:20.945 END TEST nvmf_host_management 00:08:20.945 ************************************ 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.945 ************************************ 00:08:20.945 START TEST nvmf_lvol 00:08:20.945 ************************************ 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:20.945 * Looking for test storage... 00:08:20.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.945 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.946 18:45:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated copies of the same three toolchain entries followed by the stock system PATH; multi-kilobyte value elided ...]
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same PATH with the Go bin directory prepended again; elided ...]
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH with the protoc bin directory prepended again; elided ...]
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the full exported PATH echoed back; elided ...]
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.946 18:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:27.514 18:45:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:27.514 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:08:27.514 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:27.514 Found net devices under 0000:af:00.0: cvl_0_0 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:27.514 Found net devices under 0000:af:00.1: cvl_0_1 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
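The detection pass above matches the E810 vendor:device pair (0x8086:0x159b) under /sys/bus/pci/devices and then globs each function's net/ directory to learn the kernel interface name. A minimal sketch of the same lookup for the two addresses found here; reading operstate is an assumption standing in for the trace's up == up check:

for pci in 0000:af:00.0 0000:af:00.1; do
    for net in /sys/bus/pci/devices/$pci/net/*; do
        dev=${net##*/}   # cvl_0_0 / cvl_0_1 in this log
        echo "Found net devices under $pci: $dev ($(cat "$net/operstate"))"
    done
done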
00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:27.514 00:08:27.514 --- 10.0.0.2 ping statistics --- 00:08:27.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.514 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:08:27.514 00:08:27.514 --- 10.0.0.1 ping statistics --- 00:08:27.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.514 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.514 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2352614 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2352614 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2352614 ']' 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.515 18:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.515 [2024-07-24 18:45:11.701449] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
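The nvmf_tcp_init steps above split the two E810 ports into a point-to-point pair: the target port moves into a private network namespace while the initiator port stays in the root namespace. Condensed, the same sequence is (interface names, addresses, and port exactly as in the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns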
00:08:27.515 [2024-07-24 18:45:11.701510] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.515 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.515 [2024-07-24 18:45:11.789360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.515 [2024-07-24 18:45:11.885117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.515 [2024-07-24 18:45:11.885161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.515 [2024-07-24 18:45:11.885171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.515 [2024-07-24 18:45:11.885180] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.515 [2024-07-24 18:45:11.885187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.515 [2024-07-24 18:45:11.885237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.515 [2024-07-24 18:45:11.885349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.515 [2024-07-24 18:45:11.885349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:27.515 [2024-07-24 18:45:12.419713] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.515 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.774 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:27.774 18:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:28.032 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:28.033 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:28.291 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:28.857 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=54608ad5-bfb7-4bb0-b199-b740c2ae20bd 
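Condensed, the nvmf_lvol setup that just ran is five RPCs against the target (rpc.py path shortened; sizes and flags exactly as shown in the log):

rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport for the test
rpc.py bdev_malloc_create 64 512                                   # -> Malloc0, 64 MiB / 512 B blocks
rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0 across both, 64 KiB strip
rpc.py bdev_lvol_create_lvstore raid0 lvs                          # prints the lvstore UUID (54608ad5-...)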
00:08:28.857 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54608ad5-bfb7-4bb0-b199-b740c2ae20bd lvol 20 00:08:28.857 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=27018f02-863f-43e2-8064-fbe65fbe32f5 00:08:28.857 18:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.116 18:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27018f02-863f-43e2-8064-fbe65fbe32f5 00:08:29.374 18:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:29.632 [2024-07-24 18:45:14.542479] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.632 18:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.891 18:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2353181 00:08:29.891 18:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:29.891 18:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:29.891 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.270 18:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 27018f02-863f-43e2-8064-fbe65fbe32f5 MY_SNAPSHOT 00:08:31.271 18:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=43c6b55e-f796-4b98-8534-23d2443a9bfc 00:08:31.271 18:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 27018f02-863f-43e2-8064-fbe65fbe32f5 30 00:08:31.529 18:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 43c6b55e-f796-4b98-8534-23d2443a9bfc MY_CLONE 00:08:31.788 18:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5fa763da-0abc-408a-892f-65293fb325b3 00:08:31.788 18:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5fa763da-0abc-408a-892f-65293fb325b3 00:08:32.724 18:45:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2353181 00:08:40.842 Initializing NVMe Controllers 00:08:40.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:40.842 Controller IO queue size 128, less than required. 00:08:40.842 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
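While spdk_nvme_perf drives random writes at the exported lvol from lcores 3 and 4 (-c 0x18), the test mutates the volume tree underneath it; stripped of the harness paths, that sequence is (UUIDs abbreviated from the log, the resize argument is MiB):

rpc.py bdev_lvol_snapshot 27018f02-... MY_SNAPSHOT   # freeze the data; the live lvol becomes a thin clone of it
rpc.py bdev_lvol_resize 27018f02-... 30              # grow the live lvol from 20 MiB to 30 MiB
rpc.py bdev_lvol_clone 43c6b55e-... MY_CLONE         # second writable clone off the snapshot
rpc.py bdev_lvol_inflate 5fa763da-...                # allocate all of MY_CLONE's clusters, detaching it from the snapshot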
00:08:40.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:40.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:40.842 Initialization complete. Launching workers. 00:08:40.842 ======================================================== 00:08:40.842 Latency(us) 00:08:40.842 Device Information : IOPS MiB/s Average min max 00:08:40.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6955.16 27.17 18413.49 3427.99 86809.36 00:08:40.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8587.43 33.54 14918.42 4468.55 79432.65 00:08:40.842 ======================================================== 00:08:40.842 Total : 15542.59 60.71 16482.43 3427.99 86809.36 00:08:40.842 00:08:40.842 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.842 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27018f02-863f-43e2-8064-fbe65fbe32f5 00:08:40.842 18:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54608ad5-bfb7-4bb0-b199-b740c2ae20bd 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.100 rmmod nvme_tcp 00:08:41.100 rmmod nvme_fabrics 00:08:41.100 rmmod nvme_keyring 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2352614 ']' 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2352614 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2352614 ']' 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2352614 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.100 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2352614 00:08:41.358 18:45:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.358 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.358 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2352614' 00:08:41.358 killing process with pid 2352614 00:08:41.358 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2352614 00:08:41.358 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2352614 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.617 18:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.523 00:08:43.523 real 0m22.934s 00:08:43.523 user 1m7.354s 00:08:43.523 sys 0m7.465s 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.523 ************************************ 00:08:43.523 END TEST nvmf_lvol 00:08:43.523 ************************************ 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.523 18:45:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.783 ************************************ 00:08:43.783 START TEST nvmf_lvs_grow 00:08:43.783 ************************************ 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:43.783 * Looking for test storage... 
00:08:43.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.783 18:45:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:43.783 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:43.783 18:45:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.784 18:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:50.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:50.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.357 
18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:50.357 Found net devices under 0000:af:00.0: cvl_0_0 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:50.357 Found net devices under 0000:af:00.1: cvl_0_1 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.357 18:45:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.357 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:08:50.358 00:08:50.358 --- 10.0.0.2 ping statistics --- 00:08:50.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.358 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:08:50.358 00:08:50.358 --- 10.0.0.1 ping statistics --- 00:08:50.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.358 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2359000 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2359000 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2359000 ']' 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.358 18:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.358 [2024-07-24 18:45:34.628265] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
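nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait (binary, flags, and socket path as in the log; the polling loop itself is illustrative, rpc_get_methods is just a cheap RPC that succeeds once the app is up):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  # Bail out if the target died instead of coming up.
  kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
  sleep 0.5
done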
00:08:50.358 [2024-07-24 18:45:34.628321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.358 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.358 [2024-07-24 18:45:34.714952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.358 [2024-07-24 18:45:34.803987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.358 [2024-07-24 18:45:34.804031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.358 [2024-07-24 18:45:34.804041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.358 [2024-07-24 18:45:34.804050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.358 [2024-07-24 18:45:34.804057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.358 [2024-07-24 18:45:34.804079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.618 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.877 [2024-07-24 18:45:35.830703] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.877 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:50.877 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:50.877 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.877 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.136 ************************************ 00:08:51.136 START TEST lvs_grow_clean 00:08:51.136 ************************************ 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.136 18:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.136 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.136 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.395 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:08:51.395 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:08:51.395 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.654 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.654 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.654 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 lvol 150 00:08:51.913 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4aafd36d-28ca-479e-9a41-6ea31ddea5c3 00:08:51.913 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.913 18:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.175 [2024-07-24 18:45:37.046177] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.175 [2024-07-24 18:45:37.046240] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.175 true 00:08:52.175 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:08:52.175 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.434 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.434 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.693 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4aafd36d-28ca-479e-9a41-6ea31ddea5c3 00:08:52.952 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:52.952 [2024-07-24 18:45:37.952989] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.211 18:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2359574 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2359574 /var/tmp/bdevperf.sock 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2359574 ']' 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.470 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.470 [2024-07-24 18:45:38.275237] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
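The cluster counts this test asserts on fall straight out of the sizes involved; a quick check (assuming, as the observed 49 implies, that lvstore metadata consumes one 4 MiB cluster):

echo $(( 200 / 4 - 1 ))          # 49: 200 MiB AIO file in 4 MiB clusters, one held for metadata
echo $(( 400 / 4 - 1 ))          # 99: expected once the file is truncated to 400M and the store is grown below
echo $(( 99 - (150 + 3) / 4 ))   # 61: the 150 MiB lvol pins ceil(150/4) = 38 clusters, leaving 61 free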
00:08:53.470 [2024-07-24 18:45:38.275297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359574 ] 00:08:53.470 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.470 [2024-07-24 18:45:38.356111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.470 [2024-07-24 18:45:38.456011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.729 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.729 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:53.729 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:53.988 Nvme0n1 00:08:53.988 18:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.247 [ 00:08:54.247 { 00:08:54.247 "name": "Nvme0n1", 00:08:54.247 "aliases": [ 00:08:54.247 "4aafd36d-28ca-479e-9a41-6ea31ddea5c3" 00:08:54.247 ], 00:08:54.247 "product_name": "NVMe disk", 00:08:54.247 "block_size": 4096, 00:08:54.247 "num_blocks": 38912, 00:08:54.247 "uuid": "4aafd36d-28ca-479e-9a41-6ea31ddea5c3", 00:08:54.247 "assigned_rate_limits": { 00:08:54.247 "rw_ios_per_sec": 0, 00:08:54.247 "rw_mbytes_per_sec": 0, 00:08:54.247 "r_mbytes_per_sec": 0, 00:08:54.247 "w_mbytes_per_sec": 0 00:08:54.247 }, 00:08:54.247 "claimed": false, 00:08:54.247 "zoned": false, 00:08:54.247 "supported_io_types": { 00:08:54.247 "read": true, 00:08:54.247 "write": true, 00:08:54.247 "unmap": true, 00:08:54.248 "flush": true, 00:08:54.248 "reset": true, 00:08:54.248 "nvme_admin": true, 00:08:54.248 "nvme_io": true, 00:08:54.248 "nvme_io_md": false, 00:08:54.248 "write_zeroes": true, 00:08:54.248 "zcopy": false, 00:08:54.248 "get_zone_info": false, 00:08:54.248 "zone_management": false, 00:08:54.248 "zone_append": false, 00:08:54.248 "compare": true, 00:08:54.248 "compare_and_write": true, 00:08:54.248 "abort": true, 00:08:54.248 "seek_hole": false, 00:08:54.248 "seek_data": false, 00:08:54.248 "copy": true, 00:08:54.248 "nvme_iov_md": false 00:08:54.248 }, 00:08:54.248 "memory_domains": [ 00:08:54.248 { 00:08:54.248 "dma_device_id": "system", 00:08:54.248 "dma_device_type": 1 00:08:54.248 } 00:08:54.248 ], 00:08:54.248 "driver_specific": { 00:08:54.248 "nvme": [ 00:08:54.248 { 00:08:54.248 "trid": { 00:08:54.248 "trtype": "TCP", 00:08:54.248 "adrfam": "IPv4", 00:08:54.248 "traddr": "10.0.0.2", 00:08:54.248 "trsvcid": "4420", 00:08:54.248 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:54.248 }, 00:08:54.248 "ctrlr_data": { 00:08:54.248 "cntlid": 1, 00:08:54.248 "vendor_id": "0x8086", 00:08:54.248 "model_number": "SPDK bdev Controller", 00:08:54.248 "serial_number": "SPDK0", 00:08:54.248 "firmware_revision": "24.09", 00:08:54.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.248 "oacs": { 00:08:54.248 "security": 0, 00:08:54.248 "format": 0, 00:08:54.248 "firmware": 0, 00:08:54.248 "ns_manage": 0 00:08:54.248 }, 00:08:54.248 
"multi_ctrlr": true, 00:08:54.248 "ana_reporting": false 00:08:54.248 }, 00:08:54.248 "vs": { 00:08:54.248 "nvme_version": "1.3" 00:08:54.248 }, 00:08:54.248 "ns_data": { 00:08:54.248 "id": 1, 00:08:54.248 "can_share": true 00:08:54.248 } 00:08:54.248 } 00:08:54.248 ], 00:08:54.248 "mp_policy": "active_passive" 00:08:54.248 } 00:08:54.248 } 00:08:54.248 ] 00:08:54.248 18:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2359835 00:08:54.248 18:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.248 18:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.507 Running I/O for 10 seconds... 00:08:55.444 Latency(us) 00:08:55.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.444 Nvme0n1 : 1.00 14490.00 56.60 0.00 0.00 0.00 0.00 0.00 00:08:55.444 =================================================================================================================== 00:08:55.444 Total : 14490.00 56.60 0.00 0.00 0.00 0.00 0.00 00:08:55.444 00:08:56.381 18:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:08:56.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.381 Nvme0n1 : 2.00 14553.00 56.85 0.00 0.00 0.00 0.00 0.00 00:08:56.381 =================================================================================================================== 00:08:56.381 Total : 14553.00 56.85 0.00 0.00 0.00 0.00 0.00 00:08:56.381 00:08:56.640 true 00:08:56.640 18:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:08:56.640 18:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:56.899 18:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:56.899 18:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:56.899 18:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2359835 00:08:57.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.467 Nvme0n1 : 3.00 14579.33 56.95 0.00 0.00 0.00 0.00 0.00 00:08:57.467 =================================================================================================================== 00:08:57.467 Total : 14579.33 56.95 0.00 0.00 0.00 0.00 0.00 00:08:57.467 00:08:58.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.405 Nvme0n1 : 4.00 14604.50 57.05 0.00 0.00 0.00 0.00 0.00 00:08:58.405 =================================================================================================================== 00:08:58.405 Total : 14604.50 57.05 0.00 0.00 0.00 0.00 0.00 00:08:58.405 00:08:59.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:59.340 Nvme0n1 : 5.00 14629.20 57.15 0.00 0.00 0.00 0.00 0.00 00:08:59.340 =================================================================================================================== 00:08:59.340 Total : 14629.20 57.15 0.00 0.00 0.00 0.00 0.00 00:08:59.340 00:09:00.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.718 Nvme0n1 : 6.00 14647.00 57.21 0.00 0.00 0.00 0.00 0.00 00:09:00.718 =================================================================================================================== 00:09:00.718 Total : 14647.00 57.21 0.00 0.00 0.00 0.00 0.00 00:09:00.718 00:09:01.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.655 Nvme0n1 : 7.00 14660.86 57.27 0.00 0.00 0.00 0.00 0.00 00:09:01.655 =================================================================================================================== 00:09:01.655 Total : 14660.86 57.27 0.00 0.00 0.00 0.00 0.00 00:09:01.655 00:09:02.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.591 Nvme0n1 : 8.00 14676.25 57.33 0.00 0.00 0.00 0.00 0.00 00:09:02.591 =================================================================================================================== 00:09:02.591 Total : 14676.25 57.33 0.00 0.00 0.00 0.00 0.00 00:09:02.591 00:09:03.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.529 Nvme0n1 : 9.00 14688.22 57.38 0.00 0.00 0.00 0.00 0.00 00:09:03.529 =================================================================================================================== 00:09:03.529 Total : 14688.22 57.38 0.00 0.00 0.00 0.00 0.00 00:09:03.529 00:09:04.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.464 Nvme0n1 : 10.00 14697.00 57.41 0.00 0.00 0.00 0.00 0.00 00:09:04.465 =================================================================================================================== 00:09:04.465 Total : 14697.00 57.41 0.00 0.00 0.00 0.00 0.00 00:09:04.465 00:09:04.465 00:09:04.465 Latency(us) 00:09:04.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.465 Nvme0n1 : 10.01 14696.83 57.41 0.00 0.00 8701.13 2770.39 11200.70 00:09:04.465 =================================================================================================================== 00:09:04.465 Total : 14696.83 57.41 0.00 0.00 8701.13 2770.39 11200.70 00:09:04.465 0 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2359574 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2359574 ']' 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2359574 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359574 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:04.465 
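The xtrace immediately above and below steps through the killprocess() helper from autotest_common.sh as it tears down the clean-variant bdevperf (pid 2359574): check the pid is set, confirm the process is alive, resolve its comm name (reactor_1), rule out sudo, then kill and wait. A rough bash reconstruction of the visible logic, assuming simplified error handling (the sudo branch is a guess; the trace only shows the comparison):

  killprocess() {
      local pid=$1 process_name=
      [ -n "$pid" ] || return 1                             # @948: require a pid
      kill -0 "$pid" 2>/dev/null || return 0                # @952: skip if already gone
      if [ "$(uname)" = Linux ]; then                       # @953: comm lookup is Linux-only
          process_name=$(ps --no-headers -o comm= "$pid")   # @954: e.g. reactor_1
      fi
      if [ "$process_name" = sudo ]; then                   # @958: sudo wrapper case
          pid=$(pgrep -P "$pid" | head -n1)                 # assumption: target the wrapped child
      fi
      echo "killing process with pid $pid"                  # @966
      kill "$pid"                                           # @967: triggers the shutdown signal below
      wait "$pid" 2>/dev/null                               # @972: reap and collect exit status
  }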
18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359574' 00:09:04.465 killing process with pid 2359574 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2359574 00:09:04.465 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.465 00:09:04.465 Latency(us) 00:09:04.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.465 =================================================================================================================== 00:09:04.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.465 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2359574 00:09:04.724 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.983 18:45:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.244 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:05.244 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:05.507 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:05.507 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:05.507 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.766 [2024-07-24 18:45:50.600503] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:05.766 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:06.025 request: 00:09:06.025 { 00:09:06.025 "uuid": "ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1", 00:09:06.025 "method": "bdev_lvol_get_lvstores", 00:09:06.025 "req_id": 1 00:09:06.025 } 00:09:06.025 Got JSON-RPC error response 00:09:06.025 response: 00:09:06.025 { 00:09:06.025 "code": -19, 00:09:06.025 "message": "No such device" 00:09:06.025 } 00:09:06.025 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:06.025 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.025 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.025 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.025 18:45:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.284 aio_bdev 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4aafd36d-28ca-479e-9a41-6ea31ddea5c3 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=4aafd36d-28ca-479e-9a41-6ea31ddea5c3 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:06.284 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.543 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 4aafd36d-28ca-479e-9a41-6ea31ddea5c3 -t 2000 00:09:06.803 [ 00:09:06.803 { 00:09:06.803 "name": "4aafd36d-28ca-479e-9a41-6ea31ddea5c3", 00:09:06.803 "aliases": [ 00:09:06.803 "lvs/lvol" 00:09:06.803 ], 00:09:06.803 "product_name": "Logical Volume", 00:09:06.803 "block_size": 4096, 00:09:06.803 "num_blocks": 38912, 00:09:06.803 "uuid": "4aafd36d-28ca-479e-9a41-6ea31ddea5c3", 00:09:06.803 "assigned_rate_limits": { 00:09:06.803 "rw_ios_per_sec": 0, 00:09:06.803 "rw_mbytes_per_sec": 0, 00:09:06.803 "r_mbytes_per_sec": 0, 00:09:06.803 "w_mbytes_per_sec": 0 00:09:06.803 }, 00:09:06.803 "claimed": false, 00:09:06.803 "zoned": false, 00:09:06.803 "supported_io_types": { 00:09:06.803 "read": true, 00:09:06.803 "write": true, 00:09:06.803 "unmap": true, 00:09:06.803 "flush": false, 00:09:06.803 "reset": true, 00:09:06.803 "nvme_admin": false, 00:09:06.803 "nvme_io": false, 00:09:06.803 "nvme_io_md": false, 00:09:06.803 "write_zeroes": true, 00:09:06.803 "zcopy": false, 00:09:06.803 "get_zone_info": false, 00:09:06.803 "zone_management": false, 00:09:06.803 "zone_append": false, 00:09:06.803 "compare": false, 00:09:06.803 "compare_and_write": false, 00:09:06.803 "abort": false, 00:09:06.803 "seek_hole": true, 00:09:06.803 "seek_data": true, 00:09:06.803 "copy": false, 00:09:06.803 "nvme_iov_md": false 00:09:06.803 }, 00:09:06.803 "driver_specific": { 00:09:06.803 "lvol": { 00:09:06.803 "lvol_store_uuid": "ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1", 00:09:06.803 "base_bdev": "aio_bdev", 00:09:06.803 "thin_provision": false, 00:09:06.803 "num_allocated_clusters": 38, 00:09:06.803 "snapshot": false, 00:09:06.803 "clone": false, 00:09:06.803 "esnap_clone": false 00:09:06.803 } 00:09:06.803 } 00:09:06.803 } 00:09:06.803 ] 00:09:06.803 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:06.803 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:06.803 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.064 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.064 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:07.064 18:45:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:07.323 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:07.323 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4aafd36d-28ca-479e-9a41-6ea31ddea5c3 00:09:07.582 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef1c4f3c-9772-40f9-92e8-e1f4f53c58f1 00:09:07.840 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.099 00:09:08.099 real 0m17.044s 00:09:08.099 user 0m16.765s 00:09:08.099 sys 0m1.634s 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:08.099 ************************************ 00:09:08.099 END TEST lvs_grow_clean 00:09:08.099 ************************************ 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.099 18:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.099 ************************************ 00:09:08.099 START TEST lvs_grow_dirty 00:09:08.099 ************************************ 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.099 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.358 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.358 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.616 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:08.616 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:08.616 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:08.874 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:08.874 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:08.874 18:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 lvol 150 00:09:09.133 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:09.133 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.133 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.391 [2024-07-24 18:45:54.264243] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.391 [2024-07-24 18:45:54.264306] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.391 true 00:09:09.391 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:09.391 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.650 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.650 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:09.909 18:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:10.168 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.427 [2024-07-24 18:45:55.243233] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.427 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
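That completes the dirty-variant setup: a 200 MiB aio file becomes lvstore f7f28b92 with 4 MiB clusters (--cluster-sz 4194304), a 150 MiB lvol is carved out of it, the backing file is truncated to 400 MiB and rescanned, and the lvol is exported over NVMe/TCP. The cluster counts the test asserts all fall out of that arithmetic; a quick sanity check (rpc.py abbreviates the full scripts/rpc.py path used throughout this log, $lvs the lvstore UUID):

  # 200 MiB / 4 MiB = 50 clusters, one reserved for lvstore metadata -> 49 data clusters
  # 400 MiB / 4 MiB = 100 clusters, minus the same metadata cluster  -> 99 after grow
  # 150 MiB / 4 MiB = 37.5, rounded up                               -> 38 allocated clusters
  # 99 total - 38 allocated                                          -> 61 free clusters
  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'  # 49 now, 99 after grow
  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'        # 61 after grow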
00:09:10.685 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2362783 00:09:10.685 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.685 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.685 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2362783 /var/tmp/bdevperf.sock 00:09:10.685 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2362783 ']' 00:09:10.685 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.686 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.686 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.686 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.686 18:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.686 [2024-07-24 18:45:55.553021] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
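bdevperf is launched idle and then told to run over its RPC socket; reading the flags from the command above: -m 0x2 pins the reactor to core 1, -o 4096 uses 4 KiB I/Os, -q 128 keeps 128 outstanding, -w randwrite selects the workload, -t 10 runs for ten seconds, -S 1 appears to control the per-second interim rows seen in these tables, and -z makes the app wait for an RPC instead of starting immediately. Condensed to its two steps (paths shortened to repo-relative form; in between, the test attaches the remote namespace as bdev Nvme0n1 via bdev_nvme_attach_controller, traced below):

  # Start bdevperf idle on its own RPC socket...
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # ...then kick off the measured run; this is what produces the Latency tables below.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests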
00:09:10.686 [2024-07-24 18:45:55.553081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362783 ] 00:09:10.686 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.686 [2024-07-24 18:45:55.633645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.944 [2024-07-24 18:45:55.737641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.513 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.513 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:11.513 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:11.772 Nvme0n1 00:09:11.772 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.030 [ 00:09:12.030 { 00:09:12.030 "name": "Nvme0n1", 00:09:12.030 "aliases": [ 00:09:12.030 "1f00bff1-8ea9-4a75-ad8e-00aae4c356f9" 00:09:12.030 ], 00:09:12.030 "product_name": "NVMe disk", 00:09:12.030 "block_size": 4096, 00:09:12.030 "num_blocks": 38912, 00:09:12.030 "uuid": "1f00bff1-8ea9-4a75-ad8e-00aae4c356f9", 00:09:12.030 "assigned_rate_limits": { 00:09:12.030 "rw_ios_per_sec": 0, 00:09:12.030 "rw_mbytes_per_sec": 0, 00:09:12.030 "r_mbytes_per_sec": 0, 00:09:12.030 "w_mbytes_per_sec": 0 00:09:12.030 }, 00:09:12.030 "claimed": false, 00:09:12.030 "zoned": false, 00:09:12.030 "supported_io_types": { 00:09:12.030 "read": true, 00:09:12.030 "write": true, 00:09:12.030 "unmap": true, 00:09:12.030 "flush": true, 00:09:12.030 "reset": true, 00:09:12.030 "nvme_admin": true, 00:09:12.030 "nvme_io": true, 00:09:12.030 "nvme_io_md": false, 00:09:12.030 "write_zeroes": true, 00:09:12.030 "zcopy": false, 00:09:12.030 "get_zone_info": false, 00:09:12.030 "zone_management": false, 00:09:12.030 "zone_append": false, 00:09:12.030 "compare": true, 00:09:12.030 "compare_and_write": true, 00:09:12.030 "abort": true, 00:09:12.030 "seek_hole": false, 00:09:12.030 "seek_data": false, 00:09:12.030 "copy": true, 00:09:12.030 "nvme_iov_md": false 00:09:12.030 }, 00:09:12.030 "memory_domains": [ 00:09:12.030 { 00:09:12.030 "dma_device_id": "system", 00:09:12.030 "dma_device_type": 1 00:09:12.030 } 00:09:12.030 ], 00:09:12.030 "driver_specific": { 00:09:12.030 "nvme": [ 00:09:12.030 { 00:09:12.030 "trid": { 00:09:12.030 "trtype": "TCP", 00:09:12.030 "adrfam": "IPv4", 00:09:12.030 "traddr": "10.0.0.2", 00:09:12.030 "trsvcid": "4420", 00:09:12.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.030 }, 00:09:12.030 "ctrlr_data": { 00:09:12.030 "cntlid": 1, 00:09:12.030 "vendor_id": "0x8086", 00:09:12.030 "model_number": "SPDK bdev Controller", 00:09:12.030 "serial_number": "SPDK0", 00:09:12.030 "firmware_revision": "24.09", 00:09:12.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.030 "oacs": { 00:09:12.030 "security": 0, 00:09:12.030 "format": 0, 00:09:12.030 "firmware": 0, 00:09:12.030 "ns_manage": 0 00:09:12.030 }, 00:09:12.030 
"multi_ctrlr": true, 00:09:12.030 "ana_reporting": false 00:09:12.030 }, 00:09:12.031 "vs": { 00:09:12.031 "nvme_version": "1.3" 00:09:12.031 }, 00:09:12.031 "ns_data": { 00:09:12.031 "id": 1, 00:09:12.031 "can_share": true 00:09:12.031 } 00:09:12.031 } 00:09:12.031 ], 00:09:12.031 "mp_policy": "active_passive" 00:09:12.031 } 00:09:12.031 } 00:09:12.031 ] 00:09:12.031 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2363056 00:09:12.031 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.031 18:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.289 Running I/O for 10 seconds... 00:09:13.225 Latency(us) 00:09:13.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.225 Nvme0n1 : 1.00 15076.00 58.89 0.00 0.00 0.00 0.00 0.00 00:09:13.225 =================================================================================================================== 00:09:13.225 Total : 15076.00 58.89 0.00 0.00 0.00 0.00 0.00 00:09:13.225 00:09:14.162 18:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:14.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.162 Nvme0n1 : 2.00 15145.50 59.16 0.00 0.00 0.00 0.00 0.00 00:09:14.162 =================================================================================================================== 00:09:14.162 Total : 15145.50 59.16 0.00 0.00 0.00 0.00 0.00 00:09:14.162 00:09:14.420 true 00:09:14.420 18:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:14.420 18:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:14.679 18:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.679 18:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.679 18:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2363056 00:09:15.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.247 Nvme0n1 : 3.00 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:09:15.247 =================================================================================================================== 00:09:15.247 Total : 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:09:15.247 00:09:16.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.183 Nvme0n1 : 4.00 15213.25 59.43 0.00 0.00 0.00 0.00 0.00 00:09:16.183 =================================================================================================================== 00:09:16.183 Total : 15213.25 59.43 0.00 0.00 0.00 0.00 0.00 00:09:16.183 00:09:17.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:17.118 Nvme0n1 : 5.00 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:17.118 =================================================================================================================== 00:09:17.118 Total : 15244.00 59.55 0.00 0.00 0.00 0.00 0.00 00:09:17.118 00:09:18.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.532 Nvme0n1 : 6.00 15264.83 59.63 0.00 0.00 0.00 0.00 0.00 00:09:18.532 =================================================================================================================== 00:09:18.532 Total : 15264.83 59.63 0.00 0.00 0.00 0.00 0.00 00:09:18.532 00:09:19.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.468 Nvme0n1 : 7.00 15279.43 59.69 0.00 0.00 0.00 0.00 0.00 00:09:19.468 =================================================================================================================== 00:09:19.468 Total : 15279.43 59.69 0.00 0.00 0.00 0.00 0.00 00:09:19.468 00:09:20.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.404 Nvme0n1 : 8.00 15298.50 59.76 0.00 0.00 0.00 0.00 0.00 00:09:20.404 =================================================================================================================== 00:09:20.404 Total : 15298.50 59.76 0.00 0.00 0.00 0.00 0.00 00:09:20.404 00:09:21.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.342 Nvme0n1 : 9.00 15313.33 59.82 0.00 0.00 0.00 0.00 0.00 00:09:21.342 =================================================================================================================== 00:09:21.342 Total : 15313.33 59.82 0.00 0.00 0.00 0.00 0.00 00:09:21.342 00:09:22.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.279 Nvme0n1 : 10.00 15315.90 59.83 0.00 0.00 0.00 0.00 0.00 00:09:22.279 =================================================================================================================== 00:09:22.279 Total : 15315.90 59.83 0.00 0.00 0.00 0.00 0.00 00:09:22.279 00:09:22.279 00:09:22.279 Latency(us) 00:09:22.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.279 Nvme0n1 : 10.01 15316.60 59.83 0.00 0.00 8350.07 3961.95 16562.73 00:09:22.279 =================================================================================================================== 00:09:22.279 Total : 15316.60 59.83 0.00 0.00 8350.07 3961.95 16562.73 00:09:22.279 0 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2362783 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2362783 ']' 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2362783 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2362783 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:22.279 
18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362783' 00:09:22.279 killing process with pid 2362783 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2362783 00:09:22.279 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.279 00:09:22.279 Latency(us) 00:09:22.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.279 =================================================================================================================== 00:09:22.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.279 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2362783 00:09:22.538 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.797 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.056 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:23.056 18:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2359000 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2359000 00:09:23.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2359000 Killed "${NVMF_APP[@]}" "$@" 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2365157 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2365157 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2365157 ']' 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.315 18:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.315 [2024-07-24 18:46:08.302880] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:09:23.315 [2024-07-24 18:46:08.302941] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.573 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.573 [2024-07-24 18:46:08.389804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.573 [2024-07-24 18:46:08.478297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.573 [2024-07-24 18:46:08.478339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.574 [2024-07-24 18:46:08.478349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.574 [2024-07-24 18:46:08.478358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.574 [2024-07-24 18:46:08.478366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
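The app_setup_trace notices above explain how to inspect the tracepoints enabled by -e 0xFFFF: attach spdk_trace to the live shared-memory trace, or copy the file out for offline analysis. The teardown at the end of this test does the latter; roughly (output path shortened to a placeholder $output_dir):

  # Snapshot events from the running target (app instance -i 0), per the notice above:
  spdk_trace -s nvmf -i 0
  # ...or archive the raw shm trace file, as process_shm does at the end of this test:
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0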
00:09:23.574 [2024-07-24 18:46:08.478387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.510 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.510 [2024-07-24 18:46:09.506335] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.510 [2024-07-24 18:46:09.506470] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.510 [2024-07-24 18:46:09.506508] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:24.769 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.029 18:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 -t 2000 00:09:25.029 [ 00:09:25.029 { 00:09:25.029 "name": "1f00bff1-8ea9-4a75-ad8e-00aae4c356f9", 00:09:25.029 "aliases": [ 00:09:25.029 "lvs/lvol" 00:09:25.029 ], 00:09:25.029 "product_name": "Logical Volume", 00:09:25.029 "block_size": 4096, 00:09:25.029 "num_blocks": 38912, 00:09:25.029 "uuid": "1f00bff1-8ea9-4a75-ad8e-00aae4c356f9", 00:09:25.029 "assigned_rate_limits": { 00:09:25.029 "rw_ios_per_sec": 0, 00:09:25.029 "rw_mbytes_per_sec": 0, 00:09:25.029 "r_mbytes_per_sec": 0, 00:09:25.029 "w_mbytes_per_sec": 0 00:09:25.029 }, 00:09:25.029 "claimed": false, 00:09:25.029 "zoned": false, 
00:09:25.029 "supported_io_types": { 00:09:25.029 "read": true, 00:09:25.029 "write": true, 00:09:25.029 "unmap": true, 00:09:25.029 "flush": false, 00:09:25.029 "reset": true, 00:09:25.029 "nvme_admin": false, 00:09:25.029 "nvme_io": false, 00:09:25.029 "nvme_io_md": false, 00:09:25.029 "write_zeroes": true, 00:09:25.029 "zcopy": false, 00:09:25.029 "get_zone_info": false, 00:09:25.029 "zone_management": false, 00:09:25.029 "zone_append": false, 00:09:25.029 "compare": false, 00:09:25.029 "compare_and_write": false, 00:09:25.029 "abort": false, 00:09:25.029 "seek_hole": true, 00:09:25.029 "seek_data": true, 00:09:25.029 "copy": false, 00:09:25.029 "nvme_iov_md": false 00:09:25.029 }, 00:09:25.029 "driver_specific": { 00:09:25.029 "lvol": { 00:09:25.029 "lvol_store_uuid": "f7f28b92-44d7-48c8-a5fb-6780d176fad4", 00:09:25.029 "base_bdev": "aio_bdev", 00:09:25.029 "thin_provision": false, 00:09:25.029 "num_allocated_clusters": 38, 00:09:25.029 "snapshot": false, 00:09:25.029 "clone": false, 00:09:25.029 "esnap_clone": false 00:09:25.029 } 00:09:25.029 } 00:09:25.029 } 00:09:25.029 ] 00:09:25.029 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:25.029 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:25.029 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:25.288 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:25.288 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:25.288 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:25.546 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:25.546 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.806 [2024-07-24 18:46:10.759277] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:25.806 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:25.806 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:25.806 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:25.806 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.806 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:25.806 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.065 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.065 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.065 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.065 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.065 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.065 18:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:26.065 request: 00:09:26.065 { 00:09:26.065 "uuid": "f7f28b92-44d7-48c8-a5fb-6780d176fad4", 00:09:26.065 "method": "bdev_lvol_get_lvstores", 00:09:26.065 "req_id": 1 00:09:26.065 } 00:09:26.065 Got JSON-RPC error response 00:09:26.065 response: 00:09:26.065 { 00:09:26.065 "code": -19, 00:09:26.065 "message": "No such device" 00:09:26.065 } 00:09:26.065 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:26.065 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.065 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.065 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.065 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.324 aio_bdev 00:09:26.324 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:26.324 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:26.324 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:26.324 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:26.325 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:26.325 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:26.325 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:26.583 18:46:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 -t 2000 00:09:26.841 [ 00:09:26.841 { 00:09:26.841 "name": "1f00bff1-8ea9-4a75-ad8e-00aae4c356f9", 00:09:26.841 "aliases": [ 00:09:26.841 "lvs/lvol" 00:09:26.841 ], 00:09:26.841 "product_name": "Logical Volume", 00:09:26.841 "block_size": 4096, 00:09:26.841 "num_blocks": 38912, 00:09:26.841 "uuid": "1f00bff1-8ea9-4a75-ad8e-00aae4c356f9", 00:09:26.841 "assigned_rate_limits": { 00:09:26.841 "rw_ios_per_sec": 0, 00:09:26.841 "rw_mbytes_per_sec": 0, 00:09:26.841 "r_mbytes_per_sec": 0, 00:09:26.841 "w_mbytes_per_sec": 0 00:09:26.841 }, 00:09:26.841 "claimed": false, 00:09:26.841 "zoned": false, 00:09:26.841 "supported_io_types": { 00:09:26.841 "read": true, 00:09:26.841 "write": true, 00:09:26.841 "unmap": true, 00:09:26.841 "flush": false, 00:09:26.841 "reset": true, 00:09:26.841 "nvme_admin": false, 00:09:26.841 "nvme_io": false, 00:09:26.841 "nvme_io_md": false, 00:09:26.841 "write_zeroes": true, 00:09:26.841 "zcopy": false, 00:09:26.841 "get_zone_info": false, 00:09:26.841 "zone_management": false, 00:09:26.841 "zone_append": false, 00:09:26.841 "compare": false, 00:09:26.841 "compare_and_write": false, 00:09:26.841 "abort": false, 00:09:26.841 "seek_hole": true, 00:09:26.841 "seek_data": true, 00:09:26.841 "copy": false, 00:09:26.841 "nvme_iov_md": false 00:09:26.841 }, 00:09:26.841 "driver_specific": { 00:09:26.841 "lvol": { 00:09:26.841 "lvol_store_uuid": "f7f28b92-44d7-48c8-a5fb-6780d176fad4", 00:09:26.841 "base_bdev": "aio_bdev", 00:09:26.841 "thin_provision": false, 00:09:26.841 "num_allocated_clusters": 38, 00:09:26.841 "snapshot": false, 00:09:26.841 "clone": false, 00:09:26.841 "esnap_clone": false 00:09:26.841 } 00:09:26.841 } 00:09:26.841 } 00:09:26.841 ] 00:09:26.841 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:26.841 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:26.841 18:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:27.101 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:27.101 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:27.101 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 00:09:27.359 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:27.359 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9 00:09:27.618 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7f28b92-44d7-48c8-a5fb-6780d176fad4 
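With both cluster asserts satisfied (free_clusters == 61, data_clusters == 99), teardown unwinds the stack in reverse order of creation; condensed, the cleanup traced here and just below is (rpc.py again standing for the full scripts/rpc.py path):

  rpc.py bdev_lvol_delete 1f00bff1-8ea9-4a75-ad8e-00aae4c356f9              # the lvol first
  rpc.py bdev_lvol_delete_lvstore -u f7f28b92-44d7-48c8-a5fb-6780d176fad4   # then its lvstore
  rpc.py bdev_aio_delete aio_bdev                                           # then the backing aio bdev
  rm -f test/nvmf/target/aio_bdev                                           # and finally the 400 MiB file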
00:09:27.877 18:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.136 00:09:28.136 real 0m20.068s 00:09:28.136 user 0m50.481s 00:09:28.136 sys 0m3.871s 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.136 ************************************ 00:09:28.136 END TEST lvs_grow_dirty 00:09:28.136 ************************************ 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:28.136 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:28.137 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:28.137 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:28.137 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:28.137 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:28.137 nvmf_trace.0 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.396 rmmod nvme_tcp 00:09:28.396 rmmod nvme_fabrics 00:09:28.396 rmmod nvme_keyring 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2365157 ']' 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2365157 00:09:28.396 
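nvmftestfini, traced above, quiesces and unloads the kernel initiator stack before killing the target app; simplified (the real helper retries the unload up to 20 times, per the {1..20} loop above):

  sync                          # @117: flush outstanding I/O
  modprobe -v -r nvme-tcp       # @122: the rmmod lines above show nvme_tcp,
  modprobe -v -r nvme-fabrics   # @123:   nvme_fabrics and nvme_keyring going with it
  killprocess 2365157           # @490: finally stop nvmf_tgt (traced below)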
18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2365157 ']' 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2365157 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2365157 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2365157' 00:09:28.396 killing process with pid 2365157 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2365157 00:09:28.396 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2365157 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.654 18:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.558 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.558 00:09:30.558 real 0m47.001s 00:09:30.558 user 1m14.621s 00:09:30.558 sys 0m10.434s 00:09:30.558 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.558 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.558 ************************************ 00:09:30.558 END TEST nvmf_lvs_grow 00:09:30.558 ************************************ 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.820 ************************************ 00:09:30.820 START TEST nvmf_bdev_io_wait 00:09:30.820 ************************************ 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.820 * Looking for test storage... 00:09:30.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.820 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.821 
18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.821 18:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:37.421 18:46:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.421 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:37.422 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:37.422 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:37.422 Found net devices under 0000:af:00.0: cvl_0_0 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:37.422 Found net devices under 0000:af:00.1: cvl_0_1 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:37.422 18:46:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:37.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:09:37.422 00:09:37.422 --- 10.0.0.2 ping statistics --- 00:09:37.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.422 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:09:37.422 00:09:37.422 --- 10.0.0.1 ping statistics --- 00:09:37.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.422 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2369733 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2369733 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2369733 ']' 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.422 18:46:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.422 [2024-07-24 18:46:21.780658] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
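The nvmf_tcp_init block above is the standard two-port plumbing for these phy runs: the first ice netdev (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with a single iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. Reduced to the bare commands it runs:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> host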
00:09:37.422 [2024-07-24 18:46:21.780766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.422 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.422 [2024-07-24 18:46:21.906048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.422 [2024-07-24 18:46:22.000128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.422 [2024-07-24 18:46:22.000173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.422 [2024-07-24 18:46:22.000184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.422 [2024-07-24 18:46:22.000193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.422 [2024-07-24 18:46:22.000201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.422 [2024-07-24 18:46:22.000308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.422 [2024-07-24 18:46:22.000421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.422 [2024-07-24 18:46:22.000536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.422 [2024-07-24 18:46:22.000537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.991 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.992 18:46:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 [2024-07-24 18:46:22.806805] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 Malloc0 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 [2024-07-24 18:46:22.874664] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2370012 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2370014 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:37.992 { 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme$subsystem", 00:09:37.992 "trtype": "$TEST_TRANSPORT", 00:09:37.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "$NVMF_PORT", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.992 "hdgst": ${hdgst:-false}, 00:09:37.992 "ddgst": ${ddgst:-false} 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 } 00:09:37.992 EOF 00:09:37.992 )") 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2370016 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:37.992 { 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme$subsystem", 00:09:37.992 "trtype": "$TEST_TRANSPORT", 00:09:37.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "$NVMF_PORT", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.992 "hdgst": ${hdgst:-false}, 00:09:37.992 "ddgst": ${ddgst:-false} 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 } 00:09:37.992 EOF 00:09:37.992 )") 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2370019 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:37.992 { 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme$subsystem", 00:09:37.992 "trtype": "$TEST_TRANSPORT", 00:09:37.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "$NVMF_PORT", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.992 "hdgst": ${hdgst:-false}, 00:09:37.992 "ddgst": ${ddgst:-false} 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 } 00:09:37.992 EOF 00:09:37.992 )") 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:37.992 { 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme$subsystem", 00:09:37.992 "trtype": "$TEST_TRANSPORT", 00:09:37.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "$NVMF_PORT", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.992 "hdgst": ${hdgst:-false}, 00:09:37.992 "ddgst": ${ddgst:-false} 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 } 00:09:37.992 EOF 00:09:37.992 )") 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2370012 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme1", 00:09:37.992 "trtype": "tcp", 00:09:37.992 "traddr": "10.0.0.2", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "4420", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.992 "hdgst": false, 00:09:37.992 "ddgst": false 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 }' 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
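Everything the target does above, from nvmfappstart through the last rpc_cmd, collapses to the sequence below: launch nvmf_tgt inside the namespace parked on --wait-for-rpc, wait until the RPC socket answers, then configure it. A sketch with this run's paths and values; the until-loop is a crude stand-in for the harness's waitforlisten helper:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    until $rpc spdk_get_version >/dev/null 2>&1; do sleep 0.2; done

    $rpc bdev_set_options -p 5 -c 1    # tiny bdev_io pool/cache: forces the io_wait path under test
    $rpc framework_start_init          # resume the init parked by --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420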
00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme1", 00:09:37.992 "trtype": "tcp", 00:09:37.992 "traddr": "10.0.0.2", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "4420", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.992 "hdgst": false, 00:09:37.992 "ddgst": false 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 }' 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme1", 00:09:37.992 "trtype": "tcp", 00:09:37.992 "traddr": "10.0.0.2", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "4420", 00:09:37.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.992 "hdgst": false, 00:09:37.992 "ddgst": false 00:09:37.992 }, 00:09:37.992 "method": "bdev_nvme_attach_controller" 00:09:37.992 }' 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:37.992 18:46:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:37.992 "params": { 00:09:37.992 "name": "Nvme1", 00:09:37.992 "trtype": "tcp", 00:09:37.992 "traddr": "10.0.0.2", 00:09:37.992 "adrfam": "ipv4", 00:09:37.992 "trsvcid": "4420", 00:09:37.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.993 "hdgst": false, 00:09:37.993 "ddgst": false 00:09:37.993 }, 00:09:37.993 "method": "bdev_nvme_attach_controller" 00:09:37.993 }' 00:09:37.993 [2024-07-24 18:46:22.925281] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:09:37.993 [2024-07-24 18:46:22.925324] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:37.993 [2024-07-24 18:46:22.926106] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:09:37.993 [2024-07-24 18:46:22.926143] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:37.993 [2024-07-24 18:46:22.931696] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:09:37.993 [2024-07-24 18:46:22.931751] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:37.993 [2024-07-24 18:46:22.932807] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
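All four bdevperf instances receive the same single-controller attach entry on fd 63; only the core mask, shm id (-i) and workload (-w) differ between them. A standalone reconstruction of the write instance follows; the subsystems envelope is the conventional --json layout (this log only prints the inner config entry), and /tmp/nvme1.json is a made-up file standing in for the fd-63 plumbing:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cat > /tmp/nvme1.json <<'JSON'
    {"subsystems": [{"subsystem": "bdev", "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                   "adrfam": "ipv4", "trsvcid": "4420",
                   "subnqn": "nqn.2016-06.io.spdk:cnode1",
                   "hostnqn": "nqn.2016-06.io.spdk:host1",
                   "hdgst": false, "ddgst": false}}]}]}
    JSON
    $spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
        -q 128 -o 4096 -w write -t 1 -s 256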
00:09:37.993 [2024-07-24 18:46:22.932860] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:37.993 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.252 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.252 [2024-07-24 18:46:23.127746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.252 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.252 [2024-07-24 18:46:23.187865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.252 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.511 [2024-07-24 18:46:23.269487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.511 [2024-07-24 18:46:23.277519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:38.511 [2024-07-24 18:46:23.282182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.511 [2024-07-24 18:46:23.344183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.511 [2024-07-24 18:46:23.385331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:38.512 [2024-07-24 18:46:23.434930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:38.512 Running I/O for 1 seconds... 00:09:38.512 Running I/O for 1 seconds... 00:09:38.769 Running I/O for 1 seconds... 00:09:38.770 Running I/O for 1 seconds... 00:09:39.705 00:09:39.705 Latency(us) 00:09:39.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.705 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:39.705 Nvme1n1 : 1.01 7633.51 29.82 0.00 0.00 16677.38 10545.34 24188.74 00:09:39.705 =================================================================================================================== 00:09:39.705 Total : 7633.51 29.82 0.00 0.00 16677.38 10545.34 24188.74 00:09:39.705 00:09:39.705 Latency(us) 00:09:39.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.705 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:39.705 Nvme1n1 : 1.03 2710.40 10.59 0.00 0.00 46079.68 7745.16 69587.32 00:09:39.705 =================================================================================================================== 00:09:39.705 Total : 2710.40 10.59 0.00 0.00 46079.68 7745.16 69587.32 00:09:39.705 00:09:39.705 Latency(us) 00:09:39.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.705 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:39.705 Nvme1n1 : 1.01 3040.42 11.88 0.00 0.00 41852.54 11379.43 104380.97 00:09:39.705 =================================================================================================================== 00:09:39.705 Total : 3040.42 11.88 0.00 0.00 41852.54 11379.43 104380.97 00:09:39.963 00:09:39.963 Latency(us) 00:09:39.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.963 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:39.963 Nvme1n1 : 1.00 162588.69 635.11 0.00 0.00 784.21 314.65 923.46 00:09:39.963 =================================================================================================================== 00:09:39.963 Total : 162588.69 635.11 0.00 0.00 784.21 314.65 923.46 00:09:39.963 18:46:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2370014 00:09:39.963 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2370016 00:09:39.963 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2370019 00:09:39.964 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.964 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.964 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.222 18:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.222 rmmod nvme_tcp 00:09:40.222 rmmod nvme_fabrics 00:09:40.222 rmmod nvme_keyring 00:09:40.222 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2369733 ']' 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2369733 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2369733 ']' 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2369733 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369733 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369733' 00:09:40.223 killing process with pid 2369733 00:09:40.223 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2369733 00:09:40.223 18:46:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2369733 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.481 18:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:42.385 00:09:42.385 real 0m11.735s 00:09:42.385 user 0m20.811s 00:09:42.385 sys 0m6.187s 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.385 ************************************ 00:09:42.385 END TEST nvmf_bdev_io_wait 00:09:42.385 ************************************ 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.385 18:46:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.644 ************************************ 00:09:42.644 START TEST nvmf_queue_depth 00:09:42.644 ************************************ 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:42.644 * Looking for test storage... 
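The nvmftestfini epilogue that just closed nvmf_bdev_io_wait is the same per-test teardown used throughout this run: drop the subsystem, stop the target, unload the kernel initiator modules, and undo the namespace plumbing. Approximately, with this run's pid and nqn (killprocess and _remove_spdk_ns are harness helpers sketched by hand here; the netns delete in particular is an assumption about what _remove_spdk_ns does):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2369733 && wait 2369733           # killprocess: stop this test's nvmf_tgt
    modprobe -v -r nvme-tcp                # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1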
00:09:42.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.644 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times, elided]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same directory set, rotated; remainder elided]:/var/lib/snapd/snap/bin 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same directory set, rotated; remainder elided]:/var/lib/snapd/snap/bin 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the exported PATH, identical to the value above; elided] 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']'
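(annotation) build_nvmf_app_args above contributes only the shared-memory id and the 0xFFFF tracepoint mask on this run (the conditional branches at @25 and @33 test false). A minimal bash sketch of how the target command line ends up assembled, with every value taken from traces elsewhere in this log (common.sh@29/@243/@270 and the launch at @480); the binary-path element is inferred from the launch line further below, so treat this as a sketch rather than the literal common.sh code:

  # sketch: NVMF_APP as this run effectively builds it
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # nvmf/common.sh@29, shm id 0 here
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)      # set later by nvmf_tcp_init (@243)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # @270: prefix the netns wrapper
  "${NVMF_APP[@]}" -m 0x2 &                               # what nvmfappstart launches (@480)

00:09:42.645 18:46:27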
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:42.645 18:46:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.218 18:46:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:49.218 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:49.218 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:49.218 18:46:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:49.218 Found net devices under 0000:af:00.0: cvl_0_0 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:49.218 Found net devices under 0000:af:00.1: cvl_0_1 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.218 
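(annotation) The device scan just traced matched both E810 ports (0x8086:0x159b, driver ice) and resolved their kernel interface names from sysfs. A standalone bash sketch of that resolution step, condensed from nvmf/common.sh@383-@400 above; the PCI address is the first port from this run:

  # list the net devices backing a PCI NIC, as the scan above does
  pci=0000:af:00.0                                  # first E810 port found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"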
18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:09:49.218 00:09:49.218 --- 10.0.0.2 ping statistics --- 00:09:49.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.218 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:09:49.218 00:09:49.218 --- 10.0.0.1 ping statistics --- 00:09:49.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.218 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:09:49.218 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2374036 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2374036 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2374036 ']' 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.219 18:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.219 [2024-07-24 18:46:33.496480] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
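(annotation) The target configuration that follows (rpc_cmd traces at target/queue_depth.sh@23-@27, then the bdevperf attach and run at @34-@35) reduces to a short RPC sequence. Condensed here as plain commands, assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock inside the namespace:

  # target side: TCP transport, 64 MiB malloc bdev with 512 B blocks, subsystem, namespace, listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf (-q 1024 -o 4096 -w verify -t 10) attaches, then the run is kicked off
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests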
00:09:49.219 [2024-07-24 18:46:33.496535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.219 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.219 [2024-07-24 18:46:33.582415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.219 [2024-07-24 18:46:33.685502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.219 [2024-07-24 18:46:33.685546] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.219 [2024-07-24 18:46:33.685559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.219 [2024-07-24 18:46:33.685571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.219 [2024-07-24 18:46:33.685580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.219 [2024-07-24 18:46:33.685619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.477 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.478 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.478 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.478 [2024-07-24 18:46:34.485135] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.737 Malloc0 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.737 [2024-07-24 18:46:34.553907] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2374291 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2374291 /var/tmp/bdevperf.sock 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2374291 ']' 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.737 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.737 [2024-07-24 18:46:34.609006] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:09:49.737 [2024-07-24 18:46:34.609066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374291 ] 00:09:49.737 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.737 [2024-07-24 18:46:34.691675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.996 [2024-07-24 18:46:34.783530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.996 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.996 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:49.996 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:49.996 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.996 18:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.256 NVMe0n1 00:09:50.256 18:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.256 18:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.256 Running I/O for 10 seconds... 00:10:02.509 00:10:02.509 Latency(us) 00:10:02.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.509 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:02.509 Verification LBA range: start 0x0 length 0x4000 00:10:02.509 NVMe0n1 : 10.12 6467.06 25.26 0.00 0.00 157515.87 29669.93 95801.72 00:10:02.509 =================================================================================================================== 00:10:02.509 Total : 6467.06 25.26 0.00 0.00 157515.87 29669.93 95801.72 00:10:02.509 0 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2374291 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2374291 ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2374291 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374291 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374291' 00:10:02.509 killing process with pid 2374291 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2374291 00:10:02.509 Received shutdown 
signal, test time was about 10.000000 seconds 00:10:02.509 00:10:02.509 Latency(us) 00:10:02.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.509 =================================================================================================================== 00:10:02.509 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2374291 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.509 rmmod nvme_tcp 00:10:02.509 rmmod nvme_fabrics 00:10:02.509 rmmod nvme_keyring 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2374036 ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2374036 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2374036 ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2374036 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374036 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374036' 00:10:02.509 killing process with pid 2374036 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2374036 00:10:02.509 18:46:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2374036 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.509 18:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.078 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.078 00:10:03.078 real 0m20.652s 00:10:03.078 user 0m24.793s 00:10:03.078 sys 0m5.800s 00:10:03.078 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.078 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.078 ************************************ 00:10:03.078 END TEST nvmf_queue_depth 00:10:03.078 ************************************ 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.338 ************************************ 00:10:03.338 START TEST nvmf_target_multipath 00:10:03.338 ************************************ 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:03.338 * Looking for test storage... 
00:10:03.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times, elided]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same directory set, rotated; remainder elided]:/var/lib/snapd/snap/bin 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same directory set, rotated; remainder elided]:/var/lib/snapd/snap/bin 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the exported PATH, identical to the value above; elided] 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.338 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.339 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.339 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.339 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.339 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.339 18:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
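(annotation) The scan below repeats the same E810 discovery and netns bring-up already traced for nvmf_queue_depth. multipath, however, needs a second usable target IP: nvmf/common.sh@240 leaves NVMF_SECOND_TARGET_IP empty when only one NIC pair is present, so the gate at target/multipath.sh@45-@48 further below prints 'only one NIC for nvmf test' and exits 0. A sketch of that gate; the tested variable is inferred from common.sh@240, since the trace itself only shows '[' -z ']':

  # target/multipath.sh gate, reconstructed under the assumption above
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi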
00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:09.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:09.914 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:09.914 Found net devices under 0000:af:00.0: cvl_0_0 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.914 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.915 18:46:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:09.915 Found net devices under 0000:af:00.1: cvl_0_1 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.915 18:46:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.915 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:10:09.915 00:10:09.915 --- 10.0.0.2 ping statistics --- 00:10:09.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.915 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:09.915 00:10:09.915 --- 10.0.0.1 ping statistics --- 00:10:09.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.915 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:09.915 only one NIC for nvmf test 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.915 rmmod nvme_tcp 00:10:09.915 rmmod nvme_fabrics 00:10:09.915 rmmod nvme_keyring 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.915 18:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.294 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.294 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:11.294 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:11.294 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.294 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:11.294 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:11.295 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.554 00:10:11.554 real 0m8.166s 
00:10:11.554 user 0m1.657s
00:10:11.554 sys 0m4.459s
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:11.554 ************************************
00:10:11.554 END TEST nvmf_target_multipath
00:10:11.554 ************************************
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:11.554 ************************************
00:10:11.554 START TEST nvmf_zcopy
00:10:11.554 ************************************
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:11.554 * Looking for test storage...
00:10:11.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:11.554 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:11.555 18:46:56
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.555 18:46:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.555 18:46:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:18.130 18:47:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:18.130 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:18.130 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:18.130 Found net devices under 0000:af:00.0: cvl_0_0 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:18.130 Found net devices under 0000:af:00.1: cvl_0_1 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.130 18:47:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:18.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:18.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms
00:10:18.130 
00:10:18.130 --- 10.0.0.2 ping statistics ---
00:10:18.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:18.130 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:18.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:18.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:10:18.130 
00:10:18.130 --- 10.0.0.1 ping statistics ---
00:10:18.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:18.130 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
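The nvmf_tcp_init sequence traced above is what lets one physical host run both ends of the NVMe/TCP connection: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and the two pings confirm the 10.0.0.0/24 link in both directions. A minimal standalone sketch of the same topology, using only interface names, addresses, and commands that appear in this trace (needs root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions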
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2383407
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2383407
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2383407 ']'
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:18.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:18.130 18:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.131 [2024-07-24 18:47:02.499645] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
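nvmfappstart then launches the SPDK target inside that namespace, and waitforlisten blocks until the app answers on its JSON-RPC socket (rpc_addr /var/tmp/spdk.sock, max_retries=100 per the trace). A rough equivalent of those two steps, with the retry helper approximated by a simple poll loop:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the default /var/tmp/spdk.sock socket until the target serves RPCs,
# which is essentially what waitforlisten does.
until "$SPDK/scripts/rpc.py" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done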
00:10:18.131 [2024-07-24 18:47:02.499699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:18.131 EAL: No free 2048 kB hugepages reported on node 1
00:10:18.131 [2024-07-24 18:47:02.586213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.131 [2024-07-24 18:47:02.691823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:18.131 [2024-07-24 18:47:02.691866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:18.131 [2024-07-24 18:47:02.691879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:18.131 [2024-07-24 18:47:02.691890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:18.131 [2024-07-24 18:47:02.691899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:18.131 [2024-07-24 18:47:02.691925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:18.697 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:18.697 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:10:18.697 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:18.697 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:18.697 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.955 [2024-07-24 18:47:03.744551] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
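From here the test drives the target purely over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py talking to that socket. The same setup, written as direct rpc.py calls with the arguments copied from this trace (the discovery listener and the malloc0 namespace are added a few lines further down):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
"$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, zero-copy on; -o and -c 0 come from NVMF_TRANSPORT_OPTS
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_malloc_create 32 4096 -b malloc0          # 32 MiB RAM-backed bdev to export
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1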
00:10:18.955 [2024-07-24 18:47:03.768763] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.955 malloc0
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:18.955 {
00:10:18.955 "params": {
00:10:18.955 "name": "Nvme$subsystem",
00:10:18.955 "trtype": "$TEST_TRANSPORT",
00:10:18.955 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:18.955 "adrfam": "ipv4",
00:10:18.955 "trsvcid": "$NVMF_PORT",
00:10:18.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:18.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:18.955 "hdgst": ${hdgst:-false},
00:10:18.955 "ddgst": ${ddgst:-false}
00:10:18.955 },
00:10:18.955 "method": "bdev_nvme_attach_controller"
00:10:18.955 }
00:10:18.955 EOF
00:10:18.955 )")
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:18.955 18:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:18.955 "params": {
00:10:18.955 "name": "Nvme1",
00:10:18.955 "trtype": "tcp",
00:10:18.955 "traddr": "10.0.0.2",
00:10:18.955 "adrfam": "ipv4",
00:10:18.955 "trsvcid": "4420",
00:10:18.955 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:18.955 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:18.955 "hdgst": false,
00:10:18.955 "ddgst": false
00:10:18.955 },
00:10:18.955 "method": "bdev_nvme_attach_controller"
00:10:18.955 }'
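gen_nvmf_target_json exists so bdevperf can attach to the target as an NVMe/TCP initiator without a config file on disk: the bdev_nvme_attach_controller entry printed above gets wrapped in a standard SPDK subsystems config and handed to bdevperf on an anonymous descriptor (/dev/fd/62 here). A sketch of the same pattern using an ordinary file instead; the outer subsystems wrapper is recalled from nvmf/common.sh rather than shown verbatim in this trace, and the /tmp path is made up for illustration:

cat > /tmp/bdevperf_nvme.json << 'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
JSON
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192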
00:10:18.955 [2024-07-24 18:47:03.871440] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:10:18.955 [2024-07-24 18:47:03.871507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383642 ]
00:10:18.955 EAL: No free 2048 kB hugepages reported on node 1
00:10:18.955 [2024-07-24 18:47:03.953644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.214 [2024-07-24 18:47:04.043176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.472 Running I/O for 10 seconds...
00:10:29.453 
00:10:29.453 Latency(us)
00:10:29.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:29.453 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:29.453 Verification LBA range: start 0x0 length 0x1000
00:10:29.453 Nvme1n1 : 10.02 4454.70 34.80 0.00 0.00 28650.78 614.40 37415.10
00:10:29.453 ===================================================================================================================
00:10:29.453 Total : 4454.70 34.80 0.00 0.00 28650.78 614.40 37415.10
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2385707
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:29.712 {
00:10:29.712 "params": {
00:10:29.712 "name": "Nvme$subsystem",
00:10:29.712 "trtype": "$TEST_TRANSPORT",
00:10:29.712 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:29.712 "adrfam": "ipv4",
00:10:29.712 "trsvcid": "$NVMF_PORT",
00:10:29.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:29.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:29.712 "hdgst": ${hdgst:-false},
00:10:29.712 "ddgst": ${ddgst:-false}
00:10:29.712 },
00:10:29.712 "method": "bdev_nvme_attach_controller"
00:10:29.712 }
00:10:29.712 EOF
00:10:29.712 )")
00:10:29.712 18:47:14
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:29.712 [2024-07-24 18:47:14.582131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.582182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:29.712 18:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:29.712 "params": { 00:10:29.712 "name": "Nvme1", 00:10:29.712 "trtype": "tcp", 00:10:29.712 "traddr": "10.0.0.2", 00:10:29.712 "adrfam": "ipv4", 00:10:29.712 "trsvcid": "4420", 00:10:29.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.712 "hdgst": false, 00:10:29.712 "ddgst": false 00:10:29.712 }, 00:10:29.712 "method": "bdev_nvme_attach_controller" 00:10:29.712 }' 00:10:29.712 [2024-07-24 18:47:14.594130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.594151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.602150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.602168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.614188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.614206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.626221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.626239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.626444] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:10:29.712 [2024-07-24 18:47:14.626508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385707 ] 00:10:29.712 [2024-07-24 18:47:14.638256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.638282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.650287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.650305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.712 [2024-07-24 18:47:14.662323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.662343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.674359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.674377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.686393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.686410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.698426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.698444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.712 [2024-07-24 18:47:14.709277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.712 [2024-07-24 18:47:14.710459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.712 [2024-07-24 18:47:14.710477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.722500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 18:47:14.722520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.734532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 18:47:14.734549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.746568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 18:47:14.746585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.758617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 18:47:14.758645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.770636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 18:47:14.770654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.782672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 
18:47:14.782689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.971 [2024-07-24 18:47:14.794708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.971 [2024-07-24 18:47:14.794725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.795694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.972 [2024-07-24 18:47:14.806744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.806767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.818780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.818804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.830807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.830827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.842843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.842863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.854878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.854904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.866913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.866933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.878947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.878966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.890991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.891016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.903038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.903063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.915067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.915091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.927105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.927128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.939135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.939157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.972 [2024-07-24 18:47:14.951277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.972 [2024-07-24 18:47:14.951305] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:29.972 Running I/O for 5 seconds...
00:10:29.972 [2024-07-24 18:47:14.963211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.972 [2024-07-24 18:47:14.963232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-line error pair above repeats continuously, identical except for timestamps, from 18:47:14.976468 through 18:47:19.983809 while the 5-second I/O run executes (several hundred occurrences, elapsed time 00:10:29.972 - 00:10:35.145); repetitions trimmed for readability ...]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:19.983779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:19.983809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145
00:10:35.145 Latency(us)
00:10:35.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:35.145 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:35.145 Nvme1n1 : 5.01 8735.23 68.24 0.00 0.00 14631.23 6374.87 23712.12
===================================================================================================================
00:10:35.145 Total : 8735.23 68.24 0.00 0.00 14631.23 6374.87 23712.12
00:10:35.145 [2024-07-24 18:47:19.994146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:19.994175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.006178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.006203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.018334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.018375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.030252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.030277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.042291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.042317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.054320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.054341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.066351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.066373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.078383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.078403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.090444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.090471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.102480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.102502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.114510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.114529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.126544]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.126564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.138583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.138611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.145 [2024-07-24 18:47:20.150617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.145 [2024-07-24 18:47:20.150635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.404 [2024-07-24 18:47:20.162654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.404 [2024-07-24 18:47:20.162673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.404 [2024-07-24 18:47:20.174693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.404 [2024-07-24 18:47:20.174714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.404 [2024-07-24 18:47:20.186722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.404 [2024-07-24 18:47:20.186740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.404 [2024-07-24 18:47:20.198756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.404 [2024-07-24 18:47:20.198773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2385707) - No such process 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2385707 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.404 delay0 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.404 18:47:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 
50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:35.404 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.404 [2024-07-24 18:47:20.382822] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:43.530 Initializing NVMe Controllers 00:10:43.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.530 Initialization complete. Launching workers. 00:10:43.530 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 293, failed: 7229 00:10:43.530 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7448, failed to submit 74 00:10:43.530 success 7332, unsuccess 116, failed 0 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.530 rmmod nvme_tcp 00:10:43.530 rmmod nvme_fabrics 00:10:43.530 rmmod nvme_keyring 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2383407 ']' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2383407 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2383407 ']' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2383407 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383407 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383407' 00:10:43.530 killing process with pid 2383407 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2383407 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2383407 00:10:43.530 
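For reference, the sequence the zcopy test ran above (wrap a malloc bdev in a delay bdev, expose it as a namespace, then fire aborts at the resulting slow I/O) can be reproduced by hand against a running nvmf_tgt. A minimal sketch using SPDK's scripts/rpc.py (the rpc_cmd calls in this log are a thin wrapper around it), assuming, as in this run, a malloc bdev named malloc0 already exists and the target listens on 10.0.0.2:4420:

    # add ~1s (1000000 us) of artificial latency to every read/write on malloc0
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as namespace 1 of the existing subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue up I/O (qd 64, 50/50 randrw) and submit aborts against it for 5 seconds
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The injected latency is what gives the aborts something to hit: submitted commands linger in the queue long enough for abort requests to target them, which is what the abort statistics above exercise.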
18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.530 18:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.910 00:10:44.910 real 0m33.230s 00:10:44.910 user 0m45.421s 00:10:44.910 sys 0m10.486s 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.910 ************************************ 00:10:44.910 END TEST nvmf_zcopy 00:10:44.910 ************************************ 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.910 18:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.910 ************************************ 00:10:44.910 START TEST nvmf_nmic 00:10:44.910 ************************************ 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.911 * Looking for test storage... 
00:10:44.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.911 18:47:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.911 18:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.485 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:51.486 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:51.486 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.486 18:47:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:51.486 Found net devices under 0000:af:00.0: cvl_0_0 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:51.486 Found net devices under 0000:af:00.1: cvl_0_1 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:10:51.486 00:10:51.486 --- 10.0.0.2 ping statistics --- 00:10:51.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.486 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:10:51.486 00:10:51.486 --- 10.0.0.1 ping statistics --- 00:10:51.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.486 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2391532 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2391532 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2391532 ']' 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.486 18:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:51.486 [2024-07-24 18:47:35.986349] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:10:51.486 [2024-07-24 18:47:35.986404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.486 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.486 [2024-07-24 18:47:36.076013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.486 [2024-07-24 18:47:36.165999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.486 [2024-07-24 18:47:36.166045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.486 [2024-07-24 18:47:36.166056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.486 [2024-07-24 18:47:36.166065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.486 [2024-07-24 18:47:36.166072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
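The two app_setup_trace notices above are actionable while the target is still up; a minimal sketch (assuming the spdk_trace tool sits at build/bin/spdk_trace in this workspace, and shm instance id 0 matching the -i 0 the target was started with):

    # snapshot the nvmf tracepoint group from the live target, instance 0
    build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis after the run
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0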
00:10:51.486 [2024-07-24 18:47:36.166175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.486 [2024-07-24 18:47:36.166285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.487 [2024-07-24 18:47:36.166371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.487 [2024-07-24 18:47:36.166372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.054 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 [2024-07-24 18:47:36.990438] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 Malloc0 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 [2024-07-24 18:47:37.050499] 
tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:52.054 test case1: single bdev can't be used in multiple subsystems 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.312 [2024-07-24 18:47:37.074364] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:52.312 [2024-07-24 18:47:37.074390] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:52.312 [2024-07-24 18:47:37.074400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.312 request: 00:10:52.312 { 00:10:52.312 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:52.312 "namespace": { 00:10:52.312 "bdev_name": "Malloc0", 00:10:52.312 "no_auto_visible": false 00:10:52.312 }, 00:10:52.312 "method": "nvmf_subsystem_add_ns", 00:10:52.312 "req_id": 1 00:10:52.312 } 00:10:52.312 Got JSON-RPC error response 00:10:52.312 response: 00:10:52.312 { 00:10:52.312 "code": -32602, 00:10:52.312 "message": "Invalid parameters" 00:10:52.312 } 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:52.312 Adding namespace failed - expected result. 
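Test case1 above verified that a bdev already claimed by one subsystem cannot be added to a second one: Malloc0 is held with an exclusive_write claim by cnode1, so the nvmf_subsystem_add_ns call for cnode2 fails with the "Invalid parameters" JSON-RPC error shown. A minimal sketch of the same check with scripts/rpc.py, using the NQNs, address, and port from this run:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # expected to fail: Malloc0 is already a namespace of cnode1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0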
00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:52.312 test case2: host connect to nvmf target in multiple paths 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.312 [2024-07-24 18:47:37.086491] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.312 18:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.688 18:47:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:55.063 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.063 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:10:55.063 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.063 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:55.063 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:10:56.974 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:56.974 [global] 00:10:56.974 thread=1 00:10:56.974 invalidate=1 00:10:56.974 rw=write 00:10:56.974 time_based=1 00:10:56.974 runtime=1 00:10:56.974 ioengine=libaio 00:10:56.974 direct=1 00:10:56.974 bs=4096 00:10:56.974 iodepth=1 00:10:56.974 norandommap=0 00:10:56.974 numjobs=1 00:10:56.974 00:10:56.974 verify_dump=1 00:10:56.974 verify_backlog=512 00:10:56.974 verify_state_save=0 00:10:56.974 do_verify=1 00:10:56.974 verify=crc32c-intel 00:10:56.975 [job0] 00:10:56.975 filename=/dev/nvme0n1 00:10:56.975 Could not set queue depth (nvme0n1) 00:10:57.233 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:57.233 fio-3.35 00:10:57.233 Starting 1 thread 00:10:58.608 00:10:58.608 job0: (groupid=0, jobs=1): err= 0: pid=2392776: Wed Jul 24 18:47:43 2024 00:10:58.608 read: IOPS=510, BW=2043KiB/s (2092kB/s)(2088KiB/1022msec) 00:10:58.608 slat (nsec): min=6758, max=21977, avg=7769.62, stdev=1945.43 00:10:58.608 clat (usec): min=377, max=42049, avg=1227.39, stdev=5659.06 00:10:58.608 lat (usec): min=384, max=42071, avg=1235.16, stdev=5660.83 00:10:58.608 clat percentiles (usec): 00:10:58.608 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 424], 20.00th=[ 429], 00:10:58.608 | 30.00th=[ 433], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:10:58.608 | 70.00th=[ 441], 80.00th=[ 445], 90.00th=[ 449], 95.00th=[ 469], 00:10:58.608 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.608 | 99.99th=[42206] 00:10:58.608 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:10:58.608 slat (usec): min=9, max=26315, avg=37.03, stdev=822.02 00:10:58.608 clat (usec): min=284, max=533, avg=326.15, stdev=17.68 00:10:58.608 lat (usec): min=296, max=26691, avg=363.19, stdev=823.77 00:10:58.608 clat percentiles (usec): 00:10:58.608 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:10:58.608 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 326], 00:10:58.608 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 338], 95.00th=[ 338], 00:10:58.608 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 537], 00:10:58.608 | 99.99th=[ 537] 00:10:58.608 bw ( KiB/s): min= 2416, max= 5776, per=100.00%, avg=4096.00, stdev=2375.88, samples=2 00:10:58.608 iops : min= 604, max= 1444, avg=1024.00, stdev=593.97, samples=2 00:10:58.608 lat (usec) : 500=98.51%, 750=0.84% 00:10:58.608 lat (msec) : 50=0.65% 00:10:58.608 cpu : usr=1.08%, sys=1.37%, ctx=1550, majf=0, minf=2 00:10:58.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.608 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.608 00:10:58.608 Run status group 0 (all jobs): 00:10:58.608 READ: bw=2043KiB/s (2092kB/s), 2043KiB/s-2043KiB/s (2092kB/s-2092kB/s), io=2088KiB (2138kB), run=1022-1022msec 00:10:58.608 WRITE: bw=4008KiB/s (4104kB/s), 4008KiB/s-4008KiB/s (4104kB/s-4104kB/s), io=4096KiB (4194kB), run=1022-1022msec 00:10:58.608 00:10:58.608 Disk stats (read/write): 00:10:58.608 nvme0n1: ios=545/1024, merge=0/0, ticks=1503/332, in_queue=1835, util=98.80% 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:58.608 18:47:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.608 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.608 rmmod nvme_tcp 00:10:58.608 rmmod nvme_fabrics 00:10:58.608 rmmod nvme_keyring 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2391532 ']' 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2391532 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2391532 ']' 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2391532 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2391532 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2391532' 00:10:58.866 killing process with pid 2391532 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2391532 00:10:58.866 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2391532 00:10:59.124 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.125 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.025 18:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.025 00:11:01.025 real 0m16.297s 00:11:01.025 user 0m44.725s 00:11:01.025 sys 0m5.518s 00:11:01.025 18:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.025 18:47:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.025 ************************************ 00:11:01.025 END TEST nvmf_nmic 00:11:01.025 ************************************ 00:11:01.025 18:47:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.025 18:47:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.025 18:47:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.025 18:47:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 ************************************ 00:11:01.284 START TEST nvmf_fio_target 00:11:01.284 ************************************ 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:01.284 * Looking for test storage... 00:11:01.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.284 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.285 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.856 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:07.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:07.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:07.857 Found net devices under 0000:af:00.0: cvl_0_0 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:07.857 Found net devices under 0000:af:00.1: cvl_0_1 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.857 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:07.857 18:47:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:07.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:11:07.857 00:11:07.857 --- 10.0.0.2 ping statistics --- 00:11:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.857 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:11:07.857 00:11:07.857 --- 10.0.0.1 ping statistics --- 00:11:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.857 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2396750 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2396750 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2396750 ']' 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.857 18:47:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.857 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.857 [2024-07-24 18:47:52.195587] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:11:07.857 [2024-07-24 18:47:52.195661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.857 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.857 [2024-07-24 18:47:52.283910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.857 [2024-07-24 18:47:52.375397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.857 [2024-07-24 18:47:52.375439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.858 [2024-07-24 18:47:52.375450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.858 [2024-07-24 18:47:52.375459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.858 [2024-07-24 18:47:52.375467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
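(Condensed for reference: the nvmf_tcp_init sequence traced above amounts to the shell sketch below. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and the nvmf_tgt flags are the values from this run; the actual helper in nvmf/common.sh adds device detection and error handling that this sketch omits.)

  # One E810 port is moved into a private network namespace and serves as the
  # target side; the second port stays in the default namespace as the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host
  modprobe nvme-tcp
  # The target application itself then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

(With this topology, every initiator-side step that follows — the rpc.py subsystem setup, nvme connect, and the fio runs — talks to 10.0.0.2:4420 over the link between the two ports, while the -m 0xF core mask accounts for the four reactor threads reported on cores 0-3 below.)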
00:11:07.858 [2024-07-24 18:47:52.375522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.858 [2024-07-24 18:47:52.375668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.858 [2024-07-24 18:47:52.375704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.858 [2024-07-24 18:47:52.375704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.424 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:08.424 [2024-07-24 18:47:53.409846] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.683 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.941 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:08.942 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.200 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:09.200 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.201 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:09.201 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.770 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:09.770 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:09.770 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.029 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:10.029 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.288 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:10.288 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.288 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:10.288 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:10.547 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.806 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:10.806 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.065 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:11.065 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.324 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.583 [2024-07-24 18:47:56.449239] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.583 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:11.842 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:12.101 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.480 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:13.480 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:11:13.480 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.480 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:11:13.480 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:11:13.480 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:11:15.383 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:15.383 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:15.383 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.383 18:48:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:11:15.383 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.383 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:11:15.383 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:15.383 [global] 00:11:15.383 thread=1 00:11:15.383 invalidate=1 00:11:15.383 rw=write 00:11:15.383 time_based=1 00:11:15.383 runtime=1 00:11:15.383 ioengine=libaio 00:11:15.383 direct=1 00:11:15.383 bs=4096 00:11:15.383 iodepth=1 00:11:15.383 norandommap=0 00:11:15.383 numjobs=1 00:11:15.383 00:11:15.383 verify_dump=1 00:11:15.383 verify_backlog=512 00:11:15.383 verify_state_save=0 00:11:15.383 do_verify=1 00:11:15.383 verify=crc32c-intel 00:11:15.383 [job0] 00:11:15.383 filename=/dev/nvme0n1 00:11:15.383 [job1] 00:11:15.383 filename=/dev/nvme0n2 00:11:15.383 [job2] 00:11:15.383 filename=/dev/nvme0n3 00:11:15.383 [job3] 00:11:15.383 filename=/dev/nvme0n4 00:11:15.383 Could not set queue depth (nvme0n1) 00:11:15.383 Could not set queue depth (nvme0n2) 00:11:15.383 Could not set queue depth (nvme0n3) 00:11:15.383 Could not set queue depth (nvme0n4) 00:11:15.961 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.961 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.961 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.961 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.961 fio-3.35 00:11:15.961 Starting 4 threads 00:11:17.338 00:11:17.338 job0: (groupid=0, jobs=1): err= 0: pid=2398569: Wed Jul 24 18:48:01 2024 00:11:17.338 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:11:17.338 slat (nsec): min=10260, max=27346, avg=21880.47, stdev=3287.47 00:11:17.339 clat (usec): min=40831, max=41917, avg=41077.16, stdev=293.53 00:11:17.339 lat (usec): min=40853, max=41944, avg=41099.04, stdev=294.40 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:17.339 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:17.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:17.339 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:17.339 | 99.99th=[41681] 00:11:17.339 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:17.339 slat (nsec): min=10305, max=66208, avg=12543.97, stdev=3651.35 00:11:17.339 clat (usec): min=339, max=801, avg=477.14, stdev=59.81 00:11:17.339 lat (usec): min=350, max=828, avg=489.68, stdev=59.89 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[ 396], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 424], 00:11:17.339 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 474], 60.00th=[ 494], 00:11:17.339 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 545], 95.00th=[ 570], 00:11:17.339 | 99.00th=[ 685], 99.50th=[ 734], 99.90th=[ 799], 99.95th=[ 799], 00:11:17.339 | 99.99th=[ 799] 00:11:17.339 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:11:17.339 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:11:17.339 lat (usec) : 500=61.21%, 750=34.84%, 1000=0.38% 00:11:17.339 lat (msec) : 50=3.58% 00:11:17.339 cpu : usr=0.77%, sys=0.68%, ctx=532, majf=0, minf=1 00:11:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.339 job1: (groupid=0, jobs=1): err= 0: pid=2398573: Wed Jul 24 18:48:01 2024 00:11:17.339 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:11:17.339 slat (nsec): min=10087, max=24018, avg=20794.62, stdev=3233.90 00:11:17.339 clat (usec): min=40857, max=41816, avg=41041.03, stdev=235.77 00:11:17.339 lat (usec): min=40878, max=41839, avg=41061.82, stdev=234.83 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:17.339 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:17.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:17.339 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:17.339 | 99.99th=[41681] 00:11:17.339 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:17.339 slat (usec): min=10, max=122, avg=12.90, stdev= 8.12 00:11:17.339 clat (usec): min=135, max=490, avg=288.15, stdev=35.68 00:11:17.339 lat (usec): min=225, max=502, avg=301.05, stdev=35.04 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 260], 00:11:17.339 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:11:17.339 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 347], 00:11:17.339 | 99.00th=[ 367], 99.50th=[ 433], 99.90th=[ 490], 99.95th=[ 490], 00:11:17.339 | 99.99th=[ 490] 00:11:17.339 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:11:17.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:17.339 lat (usec) : 250=8.82%, 500=87.24% 00:11:17.339 lat (msec) : 50=3.94% 00:11:17.339 cpu : usr=0.49%, sys=0.79%, ctx=533, majf=0, minf=1 00:11:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.339 job2: (groupid=0, jobs=1): err= 0: pid=2398576: Wed Jul 24 18:48:01 2024 00:11:17.339 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:11:17.339 slat (nsec): min=9715, max=23641, avg=22075.79, stdev=3039.54 00:11:17.339 clat (usec): min=40835, max=42062, avg=41168.98, stdev=423.82 00:11:17.339 lat (usec): min=40851, max=42085, avg=41191.05, stdev=424.53 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:11:17.339 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:17.339 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:17.339 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:17.339 | 99.99th=[42206] 
00:11:17.339 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:11:17.339 slat (nsec): min=9577, max=36038, avg=12036.65, stdev=4758.48 00:11:17.339 clat (usec): min=294, max=768, avg=482.68, stdev=69.94 00:11:17.339 lat (usec): min=305, max=783, avg=494.71, stdev=71.42 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[ 375], 5.00th=[ 416], 10.00th=[ 420], 20.00th=[ 429], 00:11:17.339 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 478], 60.00th=[ 498], 00:11:17.339 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 545], 95.00th=[ 644], 00:11:17.339 | 99.00th=[ 725], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 766], 00:11:17.339 | 99.99th=[ 766] 00:11:17.339 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:11:17.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:17.339 lat (usec) : 500=61.39%, 750=34.65%, 1000=0.38% 00:11:17.339 lat (msec) : 50=3.58% 00:11:17.339 cpu : usr=0.39%, sys=0.48%, ctx=531, majf=0, minf=2 00:11:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.339 job3: (groupid=0, jobs=1): err= 0: pid=2398577: Wed Jul 24 18:48:01 2024 00:11:17.339 read: IOPS=24, BW=98.1KiB/s (100kB/s)(100KiB/1019msec) 00:11:17.339 slat (nsec): min=8609, max=23741, avg=19486.76, stdev=5027.95 00:11:17.339 clat (usec): min=373, max=41341, avg=34478.41, stdev=15167.35 00:11:17.339 lat (usec): min=394, max=41352, avg=34497.90, stdev=15168.97 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[ 375], 5.00th=[ 412], 10.00th=[ 457], 20.00th=[40633], 00:11:17.339 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:17.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:17.339 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:17.339 | 99.99th=[41157] 00:11:17.339 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:11:17.339 slat (nsec): min=10710, max=44305, avg=12275.12, stdev=2012.02 00:11:17.339 clat (usec): min=232, max=557, avg=290.58, stdev=33.10 00:11:17.339 lat (usec): min=244, max=602, avg=302.86, stdev=33.51 00:11:17.339 clat percentiles (usec): 00:11:17.339 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:11:17.339 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:11:17.339 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 343], 00:11:17.339 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 562], 99.95th=[ 562], 00:11:17.339 | 99.99th=[ 562] 00:11:17.339 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:11:17.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:17.339 lat (usec) : 250=5.59%, 500=90.32%, 750=0.19% 00:11:17.339 lat (msec) : 50=3.91% 00:11:17.339 cpu : usr=0.39%, sys=0.98%, ctx=537, majf=0, minf=1 00:11:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.339 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.339 
latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.339 00:11:17.339 Run status group 0 (all jobs): 00:11:17.339 READ: bw=324KiB/s (332kB/s), 73.3KiB/s-98.1KiB/s (75.0kB/s-100kB/s), io=336KiB (344kB), run=1018-1037msec 00:11:17.339 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2012KiB/s (2022kB/s-2060kB/s), io=8192KiB (8389kB), run=1018-1037msec 00:11:17.339 00:11:17.339 Disk stats (read/write): 00:11:17.339 nvme0n1: ios=64/512, merge=0/0, ticks=647/236, in_queue=883, util=88.78% 00:11:17.339 nvme0n2: ios=53/512, merge=0/0, ticks=704/141, in_queue=845, util=88.10% 00:11:17.339 nvme0n3: ios=14/512, merge=0/0, ticks=576/247, in_queue=823, util=88.92% 00:11:17.339 nvme0n4: ios=20/512, merge=0/0, ticks=658/141, in_queue=799, util=89.67% 00:11:17.339 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:17.339 [global] 00:11:17.339 thread=1 00:11:17.339 invalidate=1 00:11:17.339 rw=randwrite 00:11:17.339 time_based=1 00:11:17.339 runtime=1 00:11:17.339 ioengine=libaio 00:11:17.339 direct=1 00:11:17.339 bs=4096 00:11:17.339 iodepth=1 00:11:17.339 norandommap=0 00:11:17.339 numjobs=1 00:11:17.339 00:11:17.339 verify_dump=1 00:11:17.339 verify_backlog=512 00:11:17.339 verify_state_save=0 00:11:17.339 do_verify=1 00:11:17.339 verify=crc32c-intel 00:11:17.339 [job0] 00:11:17.339 filename=/dev/nvme0n1 00:11:17.339 [job1] 00:11:17.339 filename=/dev/nvme0n2 00:11:17.339 [job2] 00:11:17.339 filename=/dev/nvme0n3 00:11:17.339 [job3] 00:11:17.339 filename=/dev/nvme0n4 00:11:17.339 Could not set queue depth (nvme0n1) 00:11:17.339 Could not set queue depth (nvme0n2) 00:11:17.339 Could not set queue depth (nvme0n3) 00:11:17.339 Could not set queue depth (nvme0n4) 00:11:17.597 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.597 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.597 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.597 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.597 fio-3.35 00:11:17.597 Starting 4 threads 00:11:19.003 00:11:19.003 job0: (groupid=0, jobs=1): err= 0: pid=2399075: Wed Jul 24 18:48:03 2024 00:11:19.003 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:11:19.003 slat (nsec): min=10041, max=23102, avg=21533.45, stdev=3422.32 00:11:19.003 clat (usec): min=40867, max=41146, avg=40971.67, stdev=67.61 00:11:19.003 lat (usec): min=40885, max=41168, avg=40993.20, stdev=67.88 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:19.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:19.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:19.003 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:19.003 | 99.99th=[41157] 00:11:19.003 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:11:19.003 slat (nsec): min=8942, max=66326, avg=11732.51, stdev=2870.94 00:11:19.003 clat (usec): min=186, max=336, avg=256.08, stdev=22.37 00:11:19.003 lat (usec): min=195, max=392, avg=267.81, stdev=22.91 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 233], 
20.00th=[ 239], 00:11:19.003 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:11:19.003 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 302], 00:11:19.003 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 338], 00:11:19.003 | 99.99th=[ 338] 00:11:19.003 bw ( KiB/s): min= 4096, max= 4096, per=38.47%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.003 lat (usec) : 250=38.58%, 500=57.30% 00:11:19.003 lat (msec) : 50=4.12% 00:11:19.003 cpu : usr=0.29%, sys=0.67%, ctx=535, majf=0, minf=1 00:11:19.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.003 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.003 job1: (groupid=0, jobs=1): err= 0: pid=2399077: Wed Jul 24 18:48:03 2024 00:11:19.003 read: IOPS=38, BW=153KiB/s (157kB/s)(156KiB/1018msec) 00:11:19.003 slat (nsec): min=8656, max=23114, avg=19677.67, stdev=5425.78 00:11:19.003 clat (usec): min=445, max=41824, avg=19265.81, stdev=20422.70 00:11:19.003 lat (usec): min=467, max=41847, avg=19285.49, stdev=20425.24 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[ 445], 5.00th=[ 445], 10.00th=[ 478], 20.00th=[ 644], 00:11:19.003 | 30.00th=[ 652], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[40633], 00:11:19.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:19.003 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:19.003 | 99.99th=[41681] 00:11:19.003 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:19.003 slat (nsec): min=9319, max=43807, avg=13149.14, stdev=4218.65 00:11:19.003 clat (usec): min=271, max=848, avg=498.08, stdev=74.15 00:11:19.003 lat (usec): min=281, max=860, avg=511.23, stdev=75.30 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[ 375], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 437], 00:11:19.003 | 30.00th=[ 449], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 510], 00:11:19.003 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 660], 00:11:19.003 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 848], 99.95th=[ 848], 00:11:19.003 | 99.99th=[ 848] 00:11:19.003 bw ( KiB/s): min= 4096, max= 4096, per=38.47%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.003 lat (usec) : 500=50.45%, 750=44.83%, 1000=1.45% 00:11:19.003 lat (msec) : 50=3.27% 00:11:19.003 cpu : usr=0.39%, sys=0.59%, ctx=553, majf=0, minf=1 00:11:19.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.003 issued rwts: total=39,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.003 job2: (groupid=0, jobs=1): err= 0: pid=2399085: Wed Jul 24 18:48:03 2024 00:11:19.003 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:19.003 slat (usec): min=7, max=110, avg= 9.02, stdev= 3.42 00:11:19.003 clat (usec): min=400, max=1672, avg=465.50, stdev=75.97 00:11:19.003 lat (usec): min=409, max=1693, avg=474.52, stdev=76.38 
00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[ 412], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:11:19.003 | 30.00th=[ 445], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:11:19.003 | 70.00th=[ 465], 80.00th=[ 469], 90.00th=[ 490], 95.00th=[ 515], 00:11:19.003 | 99.00th=[ 644], 99.50th=[ 701], 99.90th=[ 1532], 99.95th=[ 1680], 00:11:19.003 | 99.99th=[ 1680] 00:11:19.003 write: IOPS=1233, BW=4935KiB/s (5054kB/s)(4940KiB/1001msec); 0 zone resets 00:11:19.003 slat (nsec): min=10666, max=67520, avg=12347.21, stdev=2552.09 00:11:19.003 clat (usec): min=287, max=748, avg=396.79, stdev=93.34 00:11:19.003 lat (usec): min=305, max=760, avg=409.14, stdev=93.57 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 318], 00:11:19.003 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 416], 00:11:19.003 | 70.00th=[ 457], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 545], 00:11:19.003 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 750], 00:11:19.003 | 99.99th=[ 750] 00:11:19.003 bw ( KiB/s): min= 4096, max= 4096, per=38.47%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.003 lat (usec) : 500=85.08%, 750=14.74% 00:11:19.003 lat (msec) : 2=0.18% 00:11:19.003 cpu : usr=1.90%, sys=3.80%, ctx=2260, majf=0, minf=1 00:11:19.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.003 issued rwts: total=1024,1235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.003 job3: (groupid=0, jobs=1): err= 0: pid=2399086: Wed Jul 24 18:48:03 2024 00:11:19.003 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1017msec) 00:11:19.003 slat (nsec): min=9262, max=24198, avg=22903.65, stdev=3249.58 00:11:19.003 clat (usec): min=40859, max=41165, avg=40973.95, stdev=63.15 00:11:19.003 lat (usec): min=40883, max=41188, avg=40996.86, stdev=63.11 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:19.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:19.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:19.003 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:19.003 | 99.99th=[41157] 00:11:19.003 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:11:19.003 slat (nsec): min=9435, max=39163, avg=10702.97, stdev=1996.16 00:11:19.003 clat (usec): min=316, max=462, avg=367.41, stdev=20.57 00:11:19.003 lat (usec): min=327, max=502, avg=378.11, stdev=20.98 00:11:19.003 clat percentiles (usec): 00:11:19.003 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:11:19.003 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:11:19.003 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 404], 00:11:19.003 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 465], 99.95th=[ 465], 00:11:19.003 | 99.99th=[ 465] 00:11:19.003 bw ( KiB/s): min= 4096, max= 4096, per=38.47%, avg=4096.00, stdev= 0.00, samples=1 00:11:19.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:19.003 lat (usec) : 500=96.24% 00:11:19.003 lat (msec) : 50=3.76% 00:11:19.004 cpu : usr=0.30%, sys=0.59%, ctx=533, majf=0, 
minf=2 00:11:19.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.004 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.004 00:11:19.004 Run status group 0 (all jobs): 00:11:19.004 READ: bw=4246KiB/s (4348kB/s), 78.7KiB/s-4092KiB/s (80.6kB/s-4190kB/s), io=4420KiB (4526kB), run=1001-1041msec 00:11:19.004 WRITE: bw=10.4MiB/s (10.9MB/s), 1967KiB/s-4935KiB/s (2015kB/s-5054kB/s), io=10.8MiB (11.3MB), run=1001-1041msec 00:11:19.004 00:11:19.004 Disk stats (read/write): 00:11:19.004 nvme0n1: ios=67/512, merge=0/0, ticks=731/128, in_queue=859, util=87.68% 00:11:19.004 nvme0n2: ios=60/512, merge=0/0, ticks=1574/250, in_queue=1824, util=98.17% 00:11:19.004 nvme0n3: ios=904/1024, merge=0/0, ticks=900/410, in_queue=1310, util=99.90% 00:11:19.004 nvme0n4: ios=56/512, merge=0/0, ticks=1106/182, in_queue=1288, util=98.43% 00:11:19.004 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:19.004 [global] 00:11:19.004 thread=1 00:11:19.004 invalidate=1 00:11:19.004 rw=write 00:11:19.004 time_based=1 00:11:19.004 runtime=1 00:11:19.004 ioengine=libaio 00:11:19.004 direct=1 00:11:19.004 bs=4096 00:11:19.004 iodepth=128 00:11:19.004 norandommap=0 00:11:19.004 numjobs=1 00:11:19.004 00:11:19.004 verify_dump=1 00:11:19.004 verify_backlog=512 00:11:19.004 verify_state_save=0 00:11:19.004 do_verify=1 00:11:19.004 verify=crc32c-intel 00:11:19.004 [job0] 00:11:19.004 filename=/dev/nvme0n1 00:11:19.004 [job1] 00:11:19.004 filename=/dev/nvme0n2 00:11:19.004 [job2] 00:11:19.004 filename=/dev/nvme0n3 00:11:19.004 [job3] 00:11:19.004 filename=/dev/nvme0n4 00:11:19.004 Could not set queue depth (nvme0n1) 00:11:19.004 Could not set queue depth (nvme0n2) 00:11:19.004 Could not set queue depth (nvme0n3) 00:11:19.004 Could not set queue depth (nvme0n4) 00:11:19.004 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.004 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.004 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.004 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.004 fio-3.35 00:11:19.004 Starting 4 threads 00:11:20.383 00:11:20.383 job0: (groupid=0, jobs=1): err= 0: pid=2399507: Wed Jul 24 18:48:05 2024 00:11:20.383 read: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(10.3MiB/1023msec) 00:11:20.383 slat (nsec): min=1947, max=13990k, avg=152420.67, stdev=966024.31 00:11:20.383 clat (usec): min=6129, max=59343, avg=17576.89, stdev=7537.84 00:11:20.383 lat (usec): min=6135, max=59353, avg=17729.31, stdev=7619.61 00:11:20.383 clat percentiles (usec): 00:11:20.383 | 1.00th=[ 8979], 5.00th=[12125], 10.00th=[12387], 20.00th=[13960], 00:11:20.383 | 30.00th=[14091], 40.00th=[15401], 50.00th=[15926], 60.00th=[16712], 00:11:20.383 | 70.00th=[17171], 80.00th=[17695], 90.00th=[23987], 95.00th=[35390], 00:11:20.383 | 99.00th=[53740], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:11:20.383 | 99.99th=[59507] 00:11:20.383 write: IOPS=3002, 
BW=11.7MiB/s (12.3MB/s)(12.0MiB/1023msec); 0 zone resets 00:11:20.383 slat (usec): min=3, max=12592, avg=179.15, stdev=915.82 00:11:20.383 clat (usec): min=1024, max=87867, avg=27159.56, stdev=20540.77 00:11:20.383 lat (usec): min=1036, max=87878, avg=27338.71, stdev=20664.64 00:11:20.383 clat percentiles (usec): 00:11:20.383 | 1.00th=[ 1582], 5.00th=[ 5276], 10.00th=[ 8455], 20.00th=[11207], 00:11:20.383 | 30.00th=[12387], 40.00th=[13829], 50.00th=[15270], 60.00th=[29492], 00:11:20.383 | 70.00th=[33817], 80.00th=[46924], 90.00th=[59507], 95.00th=[66847], 00:11:20.383 | 99.00th=[85459], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:11:20.383 | 99.99th=[87557] 00:11:20.384 bw ( KiB/s): min=11888, max=12263, per=29.37%, avg=12075.50, stdev=265.17, samples=2 00:11:20.384 iops : min= 2972, max= 3065, avg=3018.50, stdev=65.76, samples=2 00:11:20.384 lat (msec) : 2=0.58%, 4=1.38%, 10=7.04%, 20=60.40%, 50=20.28% 00:11:20.384 lat (msec) : 100=10.32% 00:11:20.384 cpu : usr=3.62%, sys=3.52%, ctx=328, majf=0, minf=1 00:11:20.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:20.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.384 issued rwts: total=2638,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.384 job1: (groupid=0, jobs=1): err= 0: pid=2399510: Wed Jul 24 18:48:05 2024 00:11:20.384 read: IOPS=2119, BW=8479KiB/s (8682kB/s)(8572KiB/1011msec) 00:11:20.384 slat (nsec): min=1738, max=18398k, avg=204836.51, stdev=1286541.37 00:11:20.384 clat (msec): min=8, max=110, avg=25.08, stdev=18.22 00:11:20.384 lat (msec): min=8, max=110, avg=25.29, stdev=18.36 00:11:20.384 clat percentiles (msec): 00:11:20.384 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 15], 00:11:20.384 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 21], 00:11:20.384 | 70.00th=[ 25], 80.00th=[ 27], 90.00th=[ 47], 95.00th=[ 72], 00:11:20.384 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 111], 99.95th=[ 111], 00:11:20.384 | 99.99th=[ 111] 00:11:20.384 write: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec); 0 zone resets 00:11:20.384 slat (usec): min=3, max=44591, avg=186.17, stdev=1537.53 00:11:20.384 clat (msec): min=3, max=110, avg=27.90, stdev=16.90 00:11:20.384 lat (msec): min=3, max=110, avg=28.08, stdev=17.01 00:11:20.384 clat percentiles (msec): 00:11:20.384 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13], 00:11:20.384 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 28], 00:11:20.384 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 49], 95.00th=[ 59], 00:11:20.384 | 99.00th=[ 90], 99.50th=[ 97], 99.90th=[ 100], 99.95th=[ 111], 00:11:20.384 | 99.99th=[ 111] 00:11:20.384 bw ( KiB/s): min= 9416, max=10800, per=24.59%, avg=10108.00, stdev=978.64, samples=2 00:11:20.384 iops : min= 2354, max= 2700, avg=2527.00, stdev=244.66, samples=2 00:11:20.384 lat (msec) : 4=1.45%, 10=3.59%, 20=43.91%, 50=42.14%, 100=8.08% 00:11:20.384 lat (msec) : 250=0.83% 00:11:20.384 cpu : usr=1.88%, sys=3.47%, ctx=229, majf=0, minf=1 00:11:20.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:20.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.384 issued rwts: total=2143,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.384 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:20.384 job2: (groupid=0, jobs=1): err= 0: pid=2399511: Wed Jul 24 18:48:05 2024 00:11:20.384 read: IOPS=1998, BW=7992KiB/s (8184kB/s)(8192KiB/1025msec) 00:11:20.384 slat (nsec): min=1951, max=27167k, avg=250171.38, stdev=1838890.86 00:11:20.384 clat (usec): min=10391, max=56365, avg=30566.65, stdev=7535.76 00:11:20.384 lat (usec): min=10397, max=56397, avg=30816.82, stdev=7676.29 00:11:20.384 clat percentiles (usec): 00:11:20.384 | 1.00th=[12649], 5.00th=[22676], 10.00th=[23987], 20.00th=[24773], 00:11:20.384 | 30.00th=[27657], 40.00th=[28443], 50.00th=[29230], 60.00th=[29754], 00:11:20.384 | 70.00th=[30278], 80.00th=[35914], 90.00th=[41681], 95.00th=[47449], 00:11:20.384 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55313], 99.95th=[55837], 00:11:20.384 | 99.99th=[56361] 00:11:20.384 write: IOPS=2370, BW=9483KiB/s (9711kB/s)(9720KiB/1025msec); 0 zone resets 00:11:20.384 slat (usec): min=3, max=24629, avg=193.66, stdev=1042.35 00:11:20.384 clat (usec): min=3556, max=55005, avg=27654.49, stdev=6572.06 00:11:20.384 lat (usec): min=3565, max=55034, avg=27848.16, stdev=6661.93 00:11:20.384 clat percentiles (usec): 00:11:20.384 | 1.00th=[ 6390], 5.00th=[13566], 10.00th=[19530], 20.00th=[24511], 00:11:20.384 | 30.00th=[27132], 40.00th=[28181], 50.00th=[29230], 60.00th=[30278], 00:11:20.384 | 70.00th=[30540], 80.00th=[31065], 90.00th=[31327], 95.00th=[33817], 00:11:20.384 | 99.00th=[47973], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:11:20.384 | 99.99th=[54789] 00:11:20.384 bw ( KiB/s): min= 9160, max= 9264, per=22.41%, avg=9212.00, stdev=73.54, samples=2 00:11:20.384 iops : min= 2290, max= 2316, avg=2303.00, stdev=18.38, samples=2 00:11:20.384 lat (msec) : 4=0.29%, 10=1.12%, 20=4.87%, 50=91.96%, 100=1.76% 00:11:20.384 cpu : usr=1.95%, sys=3.03%, ctx=292, majf=0, minf=1 00:11:20.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:20.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.384 issued rwts: total=2048,2430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.384 job3: (groupid=0, jobs=1): err= 0: pid=2399512: Wed Jul 24 18:48:05 2024 00:11:20.384 read: IOPS=2003, BW=8016KiB/s (8208kB/s)(8192KiB/1022msec) 00:11:20.384 slat (usec): min=2, max=18875, avg=212.74, stdev=1301.43 00:11:20.384 clat (usec): min=6887, max=78519, avg=25263.26, stdev=11208.33 00:11:20.384 lat (usec): min=6894, max=80437, avg=25476.01, stdev=11327.27 00:11:20.384 clat percentiles (usec): 00:11:20.384 | 1.00th=[11731], 5.00th=[15139], 10.00th=[19006], 20.00th=[19006], 00:11:20.384 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20317], 60.00th=[22152], 00:11:20.384 | 70.00th=[24511], 80.00th=[32113], 90.00th=[38536], 95.00th=[44827], 00:11:20.384 | 99.00th=[74974], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:11:20.384 | 99.99th=[78119] 00:11:20.384 write: IOPS=2419, BW=9679KiB/s (9911kB/s)(9892KiB/1022msec); 0 zone resets 00:11:20.384 slat (usec): min=3, max=36344, avg=220.47, stdev=1560.90 00:11:20.384 clat (usec): min=1588, max=128101, avg=31415.63, stdev=24433.19 00:11:20.384 lat (usec): min=1601, max=128114, avg=31636.11, stdev=24570.87 00:11:20.384 clat percentiles (msec): 00:11:20.384 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 19], 00:11:20.384 | 30.00th=[ 20], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 21], 00:11:20.384 | 70.00th=[ 
31], 80.00th=[ 36], 90.00th=[ 68], 95.00th=[ 92], 00:11:20.384 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:11:20.384 | 99.99th=[ 129] 00:11:20.384 bw ( KiB/s): min= 8192, max=10568, per=22.82%, avg=9380.00, stdev=1680.09, samples=2 00:11:20.384 iops : min= 2048, max= 2642, avg=2345.00, stdev=420.02, samples=2 00:11:20.384 lat (msec) : 2=0.04%, 10=1.55%, 20=44.77%, 50=42.53%, 100=8.96% 00:11:20.384 lat (msec) : 250=2.15% 00:11:20.384 cpu : usr=2.64%, sys=2.94%, ctx=284, majf=0, minf=1 00:11:20.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:20.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.384 issued rwts: total=2048,2473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.384 00:11:20.384 Run status group 0 (all jobs): 00:11:20.384 READ: bw=33.8MiB/s (35.5MB/s), 7992KiB/s-10.1MiB/s (8184kB/s-10.6MB/s), io=34.7MiB (36.4MB), run=1011-1025msec 00:11:20.384 WRITE: bw=40.1MiB/s (42.1MB/s), 9483KiB/s-11.7MiB/s (9711kB/s-12.3MB/s), io=41.2MiB (43.2MB), run=1011-1025msec 00:11:20.384 00:11:20.384 Disk stats (read/write): 00:11:20.384 nvme0n1: ios=2465/2560, merge=0/0, ticks=35032/64839, in_queue=99871, util=87.88% 00:11:20.384 nvme0n2: ios=1821/2048, merge=0/0, ticks=29414/49162, in_queue=78576, util=98.48% 00:11:20.384 nvme0n3: ios=1682/2048, merge=0/0, ticks=51093/55335, in_queue=106428, util=99.06% 00:11:20.384 nvme0n4: ios=1629/2048, merge=0/0, ticks=26189/37366, in_queue=63555, util=96.23% 00:11:20.384 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:20.384 [global] 00:11:20.384 thread=1 00:11:20.384 invalidate=1 00:11:20.384 rw=randwrite 00:11:20.384 time_based=1 00:11:20.384 runtime=1 00:11:20.384 ioengine=libaio 00:11:20.384 direct=1 00:11:20.384 bs=4096 00:11:20.384 iodepth=128 00:11:20.384 norandommap=0 00:11:20.384 numjobs=1 00:11:20.384 00:11:20.384 verify_dump=1 00:11:20.384 verify_backlog=512 00:11:20.384 verify_state_save=0 00:11:20.384 do_verify=1 00:11:20.384 verify=crc32c-intel 00:11:20.384 [job0] 00:11:20.384 filename=/dev/nvme0n1 00:11:20.384 [job1] 00:11:20.384 filename=/dev/nvme0n2 00:11:20.384 [job2] 00:11:20.384 filename=/dev/nvme0n3 00:11:20.384 [job3] 00:11:20.384 filename=/dev/nvme0n4 00:11:20.384 Could not set queue depth (nvme0n1) 00:11:20.384 Could not set queue depth (nvme0n2) 00:11:20.384 Could not set queue depth (nvme0n3) 00:11:20.384 Could not set queue depth (nvme0n4) 00:11:20.642 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.642 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.642 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.642 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.642 fio-3.35 00:11:20.642 Starting 4 threads 00:11:22.020 00:11:22.020 job0: (groupid=0, jobs=1): err= 0: pid=2399929: Wed Jul 24 18:48:06 2024 00:11:22.020 read: IOPS=2141, BW=8567KiB/s (8773kB/s)(8696KiB/1015msec) 00:11:22.020 slat (nsec): min=1905, max=24266k, avg=238767.63, stdev=1756209.81 00:11:22.020 clat 
(usec): min=3525, max=56487, avg=29071.80, stdev=6757.11 00:11:22.020 lat (usec): min=9555, max=56494, avg=29310.57, stdev=6863.59 00:11:22.020 clat percentiles (usec): 00:11:22.020 | 1.00th=[13566], 5.00th=[24511], 10.00th=[24773], 20.00th=[25560], 00:11:22.020 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[28181], 00:11:22.020 | 70.00th=[28967], 80.00th=[31589], 90.00th=[39060], 95.00th=[44303], 00:11:22.020 | 99.00th=[49021], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:11:22.020 | 99.99th=[56361] 00:11:22.020 write: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec); 0 zone resets 00:11:22.020 slat (usec): min=3, max=23723, avg=184.76, stdev=1273.06 00:11:22.020 clat (usec): min=5258, max=52124, avg=25536.87, stdev=5356.91 00:11:22.020 lat (usec): min=5269, max=52145, avg=25721.63, stdev=5493.05 00:11:22.020 clat percentiles (usec): 00:11:22.020 | 1.00th=[ 7046], 5.00th=[13435], 10.00th=[17433], 20.00th=[23725], 00:11:22.020 | 30.00th=[25560], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:11:22.020 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[31589], 00:11:22.020 | 99.00th=[34866], 99.50th=[44827], 99.90th=[51643], 99.95th=[51643], 00:11:22.020 | 99.99th=[52167] 00:11:22.020 bw ( KiB/s): min=10096, max=10368, per=23.83%, avg=10232.00, stdev=192.33, samples=2 00:11:22.020 iops : min= 2524, max= 2592, avg=2558.00, stdev=48.08, samples=2 00:11:22.020 lat (msec) : 4=0.02%, 10=1.42%, 20=6.68%, 50=91.38%, 100=0.51% 00:11:22.020 cpu : usr=2.86%, sys=2.96%, ctx=270, majf=0, minf=1 00:11:22.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:22.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.020 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.020 job1: (groupid=0, jobs=1): err= 0: pid=2399930: Wed Jul 24 18:48:06 2024 00:11:22.020 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:11:22.020 slat (usec): min=2, max=13312, avg=122.86, stdev=680.99 00:11:22.020 clat (usec): min=10399, max=94420, avg=17121.16, stdev=9613.82 00:11:22.020 lat (usec): min=10406, max=95136, avg=17244.01, stdev=9620.15 00:11:22.020 clat percentiles (usec): 00:11:22.020 | 1.00th=[10945], 5.00th=[12125], 10.00th=[13829], 20.00th=[14353], 00:11:22.020 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15270], 60.00th=[15533], 00:11:22.020 | 70.00th=[16450], 80.00th=[17171], 90.00th=[18744], 95.00th=[22938], 00:11:22.020 | 99.00th=[89654], 99.50th=[92799], 99.90th=[92799], 99.95th=[94897], 00:11:22.020 | 99.99th=[94897] 00:11:22.020 write: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1005msec); 0 zone resets 00:11:22.020 slat (usec): min=3, max=16621, avg=131.19, stdev=818.09 00:11:22.020 clat (usec): min=1663, max=93776, avg=16237.37, stdev=8533.89 00:11:22.020 lat (usec): min=1678, max=93786, avg=16368.56, stdev=8630.39 00:11:22.020 clat percentiles (usec): 00:11:22.020 | 1.00th=[ 8455], 5.00th=[12256], 10.00th=[13829], 20.00th=[14222], 00:11:22.020 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:11:22.020 | 70.00th=[15270], 80.00th=[16581], 90.00th=[17433], 95.00th=[19268], 00:11:22.020 | 99.00th=[76022], 99.50th=[88605], 99.90th=[92799], 99.95th=[93848], 00:11:22.020 | 99.99th=[93848] 00:11:22.020 bw ( KiB/s): min=13032, max=18416, per=36.63%, avg=15724.00, stdev=3807.06, samples=2 00:11:22.020 
iops : min= 3258, max= 4604, avg=3931.00, stdev=951.77, samples=2 00:11:22.020 lat (msec) : 2=0.03%, 4=0.01%, 10=0.64%, 20=93.59%, 50=4.07% 00:11:22.020 lat (msec) : 100=1.66% 00:11:22.020 cpu : usr=4.18%, sys=4.68%, ctx=446, majf=0, minf=1 00:11:22.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:22.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.020 issued rwts: total=3584,4058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.020 job2: (groupid=0, jobs=1): err= 0: pid=2399931: Wed Jul 24 18:48:06 2024 00:11:22.020 read: IOPS=1761, BW=7046KiB/s (7215kB/s)(7152KiB/1015msec) 00:11:22.020 slat (nsec): min=1734, max=23830k, avg=233104.20, stdev=1443831.82 00:11:22.020 clat (usec): min=4637, max=98883, avg=25651.49, stdev=12514.97 00:11:22.020 lat (usec): min=10462, max=98891, avg=25884.60, stdev=12652.87 00:11:22.020 clat percentiles (usec): 00:11:22.020 | 1.00th=[12649], 5.00th=[16712], 10.00th=[17171], 20.00th=[19268], 00:11:22.020 | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[22676], 00:11:22.020 | 70.00th=[23462], 80.00th=[30802], 90.00th=[37487], 95.00th=[51119], 00:11:22.020 | 99.00th=[87557], 99.50th=[90702], 99.90th=[99091], 99.95th=[99091], 00:11:22.020 | 99.99th=[99091] 00:11:22.020 write: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec); 0 zone resets 00:11:22.020 slat (usec): min=2, max=14652, avg=281.37, stdev=1166.39 00:11:22.020 clat (msec): min=7, max=102, avg=40.18, stdev=24.59 00:11:22.020 lat (msec): min=7, max=102, avg=40.46, stdev=24.76 00:11:22.020 clat percentiles (msec): 00:11:22.020 | 1.00th=[ 13], 5.00th=[ 19], 10.00th=[ 22], 20.00th=[ 23], 00:11:22.020 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 38], 00:11:22.020 | 70.00th=[ 46], 80.00th=[ 65], 90.00th=[ 85], 95.00th=[ 93], 00:11:22.020 | 99.00th=[ 103], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 103], 00:11:22.020 | 99.99th=[ 103] 00:11:22.020 bw ( KiB/s): min= 6680, max= 9704, per=19.08%, avg=8192.00, stdev=2138.29, samples=2 00:11:22.020 iops : min= 1670, max= 2426, avg=2048.00, stdev=534.57, samples=2 00:11:22.020 lat (msec) : 10=0.18%, 20=17.26%, 50=65.38%, 100=16.06%, 250=1.12% 00:11:22.020 cpu : usr=2.27%, sys=2.27%, ctx=312, majf=0, minf=1 00:11:22.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:22.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.020 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.020 job3: (groupid=0, jobs=1): err= 0: pid=2399932: Wed Jul 24 18:48:06 2024 00:11:22.020 read: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec) 00:11:22.020 slat (usec): min=3, max=15733, avg=203.16, stdev=1382.30 00:11:22.020 clat (usec): min=14040, max=67423, avg=27502.65, stdev=5618.66 00:11:22.020 lat (usec): min=14046, max=67430, avg=27705.81, stdev=5746.95 00:11:22.020 clat percentiles (usec): 00:11:22.020 | 1.00th=[14091], 5.00th=[17433], 10.00th=[22152], 20.00th=[23725], 00:11:22.020 | 30.00th=[24773], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:11:22.020 | 70.00th=[30016], 80.00th=[32900], 90.00th=[33424], 95.00th=[38011], 00:11:22.020 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 
99.95th=[46924], 00:11:22.020 | 99.99th=[67634] 00:11:22.020 write: IOPS=2195, BW=8780KiB/s (8991kB/s)(8912KiB/1015msec); 0 zone resets 00:11:22.020 slat (usec): min=3, max=25968, avg=242.97, stdev=1549.53 00:11:22.020 clat (msec): min=2, max=106, avg=32.14, stdev=19.22 00:11:22.020 lat (msec): min=6, max=106, avg=32.39, stdev=19.35 00:11:22.020 clat percentiles (msec): 00:11:22.020 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:11:22.020 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 30], 60.00th=[ 33], 00:11:22.020 | 70.00th=[ 34], 80.00th=[ 40], 90.00th=[ 50], 95.00th=[ 80], 00:11:22.020 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:11:22.020 | 99.99th=[ 107] 00:11:22.020 bw ( KiB/s): min= 6040, max=10768, per=19.58%, avg=8404.00, stdev=3343.20, samples=2 00:11:22.020 iops : min= 1510, max= 2692, avg=2101.00, stdev=835.80, samples=2 00:11:22.020 lat (msec) : 4=0.02%, 10=0.35%, 20=15.20%, 50=79.28%, 100=3.48% 00:11:22.020 lat (msec) : 250=1.66% 00:11:22.020 cpu : usr=2.27%, sys=3.35%, ctx=161, majf=0, minf=1 00:11:22.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:11:22.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.020 issued rwts: total=2048,2228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.020 00:11:22.020 Run status group 0 (all jobs): 00:11:22.020 READ: bw=36.9MiB/s (38.7MB/s), 7046KiB/s-13.9MiB/s (7215kB/s-14.6MB/s), io=37.5MiB (39.3MB), run=1005-1015msec 00:11:22.020 WRITE: bw=41.9MiB/s (44.0MB/s), 8071KiB/s-15.8MiB/s (8265kB/s-16.5MB/s), io=42.6MiB (44.6MB), run=1005-1015msec 00:11:22.020 00:11:22.020 Disk stats (read/write): 00:11:22.020 nvme0n1: ios=1774/2048, merge=0/0, ticks=51132/51821, in_queue=102953, util=97.19% 00:11:22.020 nvme0n2: ios=3047/3072, merge=0/0, ticks=18436/23038, in_queue=41474, util=93.78% 00:11:22.020 nvme0n3: ios=1580/1815, merge=0/0, ticks=29220/55093, in_queue=84313, util=96.82% 00:11:22.020 nvme0n4: ios=1542/1999, merge=0/0, ticks=23635/33578, in_queue=57213, util=89.58% 00:11:22.020 18:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:22.020 18:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2400190 00:11:22.020 18:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:22.021 18:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.021 [global] 00:11:22.021 thread=1 00:11:22.021 invalidate=1 00:11:22.021 rw=read 00:11:22.021 time_based=1 00:11:22.021 runtime=10 00:11:22.021 ioengine=libaio 00:11:22.021 direct=1 00:11:22.021 bs=4096 00:11:22.021 iodepth=1 00:11:22.021 norandommap=1 00:11:22.021 numjobs=1 00:11:22.021 00:11:22.021 [job0] 00:11:22.021 filename=/dev/nvme0n1 00:11:22.021 [job1] 00:11:22.021 filename=/dev/nvme0n2 00:11:22.021 [job2] 00:11:22.021 filename=/dev/nvme0n3 00:11:22.021 [job3] 00:11:22.021 filename=/dev/nvme0n4 00:11:22.021 Could not set queue depth (nvme0n1) 00:11:22.021 Could not set queue depth (nvme0n2) 00:11:22.021 Could not set queue depth (nvme0n3) 00:11:22.021 Could not set queue depth (nvme0n4) 00:11:22.279 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.279 job1: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.279 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.279 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.279 fio-3.35 00:11:22.279 Starting 4 threads 00:11:25.564 18:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:25.564 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4091904, buflen=4096 00:11:25.564 fio: pid=2400373, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:25.564 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:25.564 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.564 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:25.564 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=315392, buflen=4096 00:11:25.564 fio: pid=2400372, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:25.823 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=23969792, buflen=4096 00:11:25.823 fio: pid=2400358, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:25.823 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.823 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:26.081 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.081 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:26.081 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=499712, buflen=4096 00:11:26.081 fio: pid=2400365, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:26.081 00:11:26.081 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2400358: Wed Jul 24 18:48:10 2024 00:11:26.081 read: IOPS=1808, BW=7231KiB/s (7405kB/s)(22.9MiB/3237msec) 00:11:26.081 slat (usec): min=5, max=13805, avg=10.17, stdev=180.37 00:11:26.081 clat (usec): min=265, max=41979, avg=537.99, stdev=1307.42 00:11:26.081 lat (usec): min=273, max=41998, avg=548.16, stdev=1320.25 00:11:26.081 clat percentiles (usec): 00:11:26.081 | 1.00th=[ 338], 5.00th=[ 371], 10.00th=[ 404], 20.00th=[ 453], 00:11:26.081 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 494], 00:11:26.081 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 594], 95.00th=[ 644], 00:11:26.081 | 99.00th=[ 709], 99.50th=[ 922], 99.90th=[38536], 99.95th=[41681], 00:11:26.081 | 99.99th=[42206] 00:11:26.081 bw ( KiB/s): min= 5926, max= 8024, per=88.69%, avg=7239.67, stdev=790.68, samples=6 00:11:26.081 iops : min= 1481, max= 2006, avg=1809.83, stdev=197.84, 
samples=6 00:11:26.081 lat (usec) : 500=66.27%, 750=32.92%, 1000=0.34% 00:11:26.081 lat (msec) : 2=0.32%, 4=0.02%, 50=0.10% 00:11:26.081 cpu : usr=0.65%, sys=1.61%, ctx=5856, majf=0, minf=1 00:11:26.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.081 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.081 issued rwts: total=5853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.081 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2400365: Wed Jul 24 18:48:10 2024 00:11:26.081 read: IOPS=35, BW=141KiB/s (145kB/s)(488KiB/3455msec) 00:11:26.081 slat (usec): min=6, max=15854, avg=227.32, stdev=1558.29 00:11:26.081 clat (usec): min=406, max=42345, avg=27916.86, stdev=19135.76 00:11:26.081 lat (usec): min=413, max=57239, avg=28145.87, stdev=19357.34 00:11:26.081 clat percentiles (usec): 00:11:26.081 | 1.00th=[ 412], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 578], 00:11:26.081 | 30.00th=[ 996], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:11:26.081 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:26.081 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.082 | 99.99th=[42206] 00:11:26.082 bw ( KiB/s): min= 96, max= 279, per=1.74%, avg=142.50, stdev=73.72, samples=6 00:11:26.082 iops : min= 24, max= 69, avg=35.50, stdev=18.15, samples=6 00:11:26.082 lat (usec) : 500=17.07%, 750=8.94%, 1000=4.88% 00:11:26.082 lat (msec) : 2=0.81%, 4=0.81%, 50=66.67% 00:11:26.082 cpu : usr=0.03%, sys=0.06%, ctx=127, majf=0, minf=1 00:11:26.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.082 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.082 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.082 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2400372: Wed Jul 24 18:48:10 2024 00:11:26.082 read: IOPS=26, BW=103KiB/s (105kB/s)(308KiB/2994msec) 00:11:26.082 slat (nsec): min=9478, max=28877, avg=22209.04, stdev=3432.40 00:11:26.082 clat (usec): min=564, max=42865, avg=38570.50, stdev=10059.53 00:11:26.082 lat (usec): min=586, max=42887, avg=38592.71, stdev=10059.04 00:11:26.082 clat percentiles (usec): 00:11:26.082 | 1.00th=[ 562], 5.00th=[ 627], 10.00th=[40633], 20.00th=[41157], 00:11:26.082 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:26.082 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:26.082 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:26.082 | 99.99th=[42730] 00:11:26.082 bw ( KiB/s): min= 96, max= 112, per=1.27%, avg=104.00, stdev= 8.00, samples=5 00:11:26.082 iops : min= 24, max= 28, avg=26.00, stdev= 2.00, samples=5 00:11:26.082 lat (usec) : 750=5.13% 00:11:26.082 lat (msec) : 2=1.28%, 50=92.31% 00:11:26.082 cpu : usr=0.00%, sys=0.13%, ctx=80, majf=0, minf=1 00:11:26.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.082 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:26.082 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.082 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2400373: Wed Jul 24 18:48:10 2024 00:11:26.082 read: IOPS=368, BW=1471KiB/s (1506kB/s)(3996KiB/2717msec) 00:11:26.082 slat (nsec): min=6466, max=39917, avg=8441.62, stdev=4131.15 00:11:26.082 clat (usec): min=418, max=42929, avg=2688.46, stdev=8953.43 00:11:26.082 lat (usec): min=425, max=42951, avg=2696.91, stdev=8956.37 00:11:26.082 clat percentiles (usec): 00:11:26.082 | 1.00th=[ 437], 5.00th=[ 465], 10.00th=[ 494], 20.00th=[ 570], 00:11:26.082 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 603], 00:11:26.082 | 70.00th=[ 635], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[40633], 00:11:26.082 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:11:26.082 | 99.99th=[42730] 00:11:26.082 bw ( KiB/s): min= 104, max= 2672, per=17.40%, avg=1420.80, stdev=1274.46, samples=5 00:11:26.082 iops : min= 26, max= 668, avg=355.20, stdev=318.61, samples=5 00:11:26.082 lat (usec) : 500=11.10%, 750=82.20%, 1000=1.30% 00:11:26.082 lat (msec) : 2=0.10%, 50=5.20% 00:11:26.082 cpu : usr=0.04%, sys=0.44%, ctx=1000, majf=0, minf=2 00:11:26.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.082 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.082 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.082 00:11:26.082 Run status group 0 (all jobs): 00:11:26.082 READ: bw=8162KiB/s (8358kB/s), 103KiB/s-7231KiB/s (105kB/s-7405kB/s), io=27.5MiB (28.9MB), run=2717-3455msec 00:11:26.082 00:11:26.082 Disk stats (read/write): 00:11:26.082 nvme0n1: ios=5581/0, merge=0/0, ticks=3026/0, in_queue=3026, util=95.32% 00:11:26.082 nvme0n2: ios=120/0, merge=0/0, ticks=3322/0, in_queue=3322, util=95.66% 00:11:26.082 nvme0n3: ios=115/0, merge=0/0, ticks=3751/0, in_queue=3751, util=98.82% 00:11:26.082 nvme0n4: ios=891/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.45% 00:11:26.340 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.340 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:26.599 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.599 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:26.858 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.858 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:27.116 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.116 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2400190 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:27.375 nvmf hotplug test: fio failed as expected 00:11:27.375 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.633 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.633 rmmod nvme_tcp 00:11:27.892 rmmod nvme_fabrics 00:11:27.892 rmmod nvme_keyring 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:27.892 
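The unload sequence traced above is deliberately tolerant: errexit is dropped with set +e, the nvme-tcp unload sits inside a retry loop (for i in {1..20}) so it can fail while initiator-side references drain, and strict mode is restored with set -e once nvme-fabrics is gone. A minimal sketch of that guard pattern, paraphrased from the nvmf/common.sh trace rather than copied from it (the sleep between attempts is an assumption; the trace only shows a first-try success):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # fails while nvme_tcp still has users
      sleep 1                            # assumed back-off between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e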
18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2396750 ']' 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2396750 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2396750 ']' 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2396750 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396750 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396750' 00:11:27.892 killing process with pid 2396750 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2396750 00:11:27.892 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2396750 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.151 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.059 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.059 00:11:30.059 real 0m28.942s 00:11:30.059 user 2m25.490s 00:11:30.059 sys 0m8.181s 00:11:30.059 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.059 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.059 ************************************ 00:11:30.059 END TEST nvmf_fio_target 00:11:30.059 ************************************ 00:11:30.059 18:48:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:30.059 18:48:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:30.059 18:48:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.059 18:48:15 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.319 ************************************ 00:11:30.319 START TEST nvmf_bdevio 00:11:30.319 ************************************ 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:30.319 * Looking for test storage... 00:11:30.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.319 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.320 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.320 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.893 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.893 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:36.894 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:36.894 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:36.894 Found net devices under 0000:af:00.0: cvl_0_0 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:36.894 Found net devices under 0000:af:00.1: cvl_0_1 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.894 18:48:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:36.894 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:36.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:36.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms
00:11:36.894
00:11:36.894 --- 10.0.0.2 ping statistics ---
00:11:36.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:36.894 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:36.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
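The namespace plumbing above is what lets one host act as both NVMe-oF target and initiator over a real E810 link: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, its peer port cvl_0_1 stays in the root namespace as 10.0.0.1, and iptables opens TCP port 4420 for the fabric traffic; the two pings are the reachability check in each direction. On a box without a dual-port NIC the same shape can be approximated with a veth pair; a hedged sketch under that assumption (spdk_tgt_ns, veth_ini and veth_tgt are illustrative names, not taken from this log):

  ip netns add spdk_tgt_ns
  ip link add veth_ini type veth peer name veth_tgt    # virtual stand-in for the two physical ports
  ip link set veth_tgt netns spdk_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_ini
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_ini up
  ip netns exec spdk_tgt_ns ip link set veth_tgt up
  ip netns exec spdk_tgt_ns ip link set lo up
  ping -c 1 10.0.0.2    # same reachability check the harness performs here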
00:11:36.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:11:36.894 00:11:36.894 --- 10.0.0.1 ping statistics --- 00:11:36.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.894 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:36.894 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2405528 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2405528 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2405528 ']' 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.895 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:36.895 [2024-07-24 18:48:21.236712] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
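Up to this point the fixture has wired the two E810 ports into a split target/initiator topology: the target-side port is moved into its own network namespace so both ends of the TCP connection can run on a single host. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used by this run:

  # target NIC goes into a private namespace; initiator NIC stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Every target-side command from here on is prefixed with "ip netns exec cvl_0_0_ns_spdk" in the trace, which is what NVMF_TARGET_NS_CMD holds.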
00:11:36.895 [2024-07-24 18:48:21.236767] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.895 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.895 [2024-07-24 18:48:21.355066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.895 [2024-07-24 18:48:21.503586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.895 [2024-07-24 18:48:21.503660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.895 [2024-07-24 18:48:21.503682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.895 [2024-07-24 18:48:21.503700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.895 [2024-07-24 18:48:21.503716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.895 [2024-07-24 18:48:21.503856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:36.895 [2024-07-24 18:48:21.503967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:36.895 [2024-07-24 18:48:21.504080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:36.895 [2024-07-24 18:48:21.504085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.182 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.182 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:37.182 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.182 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:37.182 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.442 [2024-07-24 18:48:22.226324] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.442 Malloc0 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.442 [2024-07-24 18:48:22.290747] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:37.442 { 00:11:37.442 "params": { 00:11:37.442 "name": "Nvme$subsystem", 00:11:37.442 "trtype": "$TEST_TRANSPORT", 00:11:37.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.442 "adrfam": "ipv4", 00:11:37.442 "trsvcid": "$NVMF_PORT", 00:11:37.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.442 "hdgst": ${hdgst:-false}, 00:11:37.442 "ddgst": ${ddgst:-false} 00:11:37.442 }, 00:11:37.442 "method": "bdev_nvme_attach_controller" 00:11:37.442 } 00:11:37.442 EOF 00:11:37.442 )") 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
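The rpc_cmd calls above provision the entire bdevio target in four steps: a TCP transport, a 64 MiB ramdisk, a subsystem, and a listener. rpc_cmd is the harness's wrapper that in effect drives SPDK's scripts/rpc.py against /var/tmp/spdk.sock; a standalone equivalent, assuming a running nvmf_tgt, would look roughly like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # -o/-u come from NVMF_TRANSPORT_OPTS for tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json (the heredoc being assembled above) then builds the initiator-side JSON that bdevio consumes on /dev/fd/62.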
00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:37.442 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:37.442 "params": { 00:11:37.442 "name": "Nvme1", 00:11:37.442 "trtype": "tcp", 00:11:37.442 "traddr": "10.0.0.2", 00:11:37.442 "adrfam": "ipv4", 00:11:37.442 "trsvcid": "4420", 00:11:37.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.442 "hdgst": false, 00:11:37.442 "ddgst": false 00:11:37.442 }, 00:11:37.442 "method": "bdev_nvme_attach_controller" 00:11:37.442 }' 00:11:37.442 [2024-07-24 18:48:22.342956] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:11:37.442 [2024-07-24 18:48:22.343017] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405665 ] 00:11:37.442 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.443 [2024-07-24 18:48:22.425917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:37.701 [2024-07-24 18:48:22.516153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.701 [2024-07-24 18:48:22.516265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.702 [2024-07-24 18:48:22.516266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.960 I/O targets: 00:11:37.960 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:37.960 00:11:37.960 00:11:37.960 CUnit - A unit testing framework for C - Version 2.1-3 00:11:37.960 http://cunit.sourceforge.net/ 00:11:37.960 00:11:37.960 00:11:37.960 Suite: bdevio tests on: Nvme1n1 00:11:37.960 Test: blockdev write read block ...passed 00:11:37.960 Test: blockdev write zeroes read block ...passed 00:11:37.960 Test: blockdev write zeroes read no split ...passed 00:11:38.219 Test: blockdev write zeroes read split ...passed 00:11:38.219 Test: blockdev write zeroes read split partial ...passed 00:11:38.219 Test: blockdev reset ...[2024-07-24 18:48:23.006069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:38.219 [2024-07-24 18:48:23.006151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa57c80 (9): Bad file descriptor 00:11:38.219 [2024-07-24 18:48:23.019524] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:38.219 passed 00:11:38.219 Test: blockdev write read 8 blocks ...passed 00:11:38.219 Test: blockdev write read size > 128k ...passed 00:11:38.219 Test: blockdev write read invalid size ...passed 00:11:38.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:38.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:38.219 Test: blockdev write read max offset ...passed 00:11:38.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:38.219 Test: blockdev writev readv 8 blocks ...passed 00:11:38.478 Test: blockdev writev readv 30 x 1block ...passed 00:11:38.478 Test: blockdev writev readv block ...passed 00:11:38.478 Test: blockdev writev readv size > 128k ...passed 00:11:38.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:38.478 Test: blockdev comparev and writev ...[2024-07-24 18:48:23.279553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.279627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.279671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.279694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.280325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.280358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.280396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.280418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.281056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.281099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.281137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.281159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.281761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.281792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.281830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.478 [2024-07-24 18:48:23.281854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:38.478 passed 00:11:38.478 Test: blockdev nvme passthru rw ...passed 00:11:38.478 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:48:23.364281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:38.478 [2024-07-24 18:48:23.364322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.364616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:38.478 [2024-07-24 18:48:23.364647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.364917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:38.478 [2024-07-24 18:48:23.364946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:38.478 [2024-07-24 18:48:23.365227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:38.478 [2024-07-24 18:48:23.365257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:38.478 passed 00:11:38.478 Test: blockdev nvme admin passthru ...passed 00:11:38.478 Test: blockdev copy ...passed 00:11:38.478 00:11:38.478 Run Summary: Type Total Ran Passed Failed Inactive 00:11:38.478 suites 1 1 n/a 0 0 00:11:38.478 tests 23 23 23 0 0 00:11:38.478 asserts 152 152 152 0 n/a 00:11:38.478 00:11:38.478 Elapsed time = 1.150 seconds 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.737 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.738 rmmod nvme_tcp 00:11:38.738 rmmod nvme_fabrics 00:11:38.738 rmmod nvme_keyring 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
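For reference, the configuration generated above and handed to bdevio on /dev/fd/62 is a single bdev_nvme_attach_controller call; with the trace prefixes stripped it reads:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

That attach yields the Nvme1n1 bdev (131072 blocks x 512 B = 64 MiB) the 23 bdevio tests ran against; the COMPARE FAILURE and ABORTED - FAILED FUSED notices above come from the fused compare-and-write cases, which the suite treats as expected, hence the 23/23 passed summary.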
00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2405528 ']' 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2405528 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2405528 ']' 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2405528 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2405528 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2405528' 00:11:38.738 killing process with pid 2405528 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2405528 00:11:38.738 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2405528 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.307 18:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.213 18:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.213 00:11:41.213 real 0m11.068s 00:11:41.213 user 0m14.314s 00:11:41.213 sys 0m5.126s 00:11:41.213 18:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.213 18:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.213 ************************************ 00:11:41.213 END TEST nvmf_bdevio 00:11:41.213 ************************************ 00:11:41.213 18:48:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:41.213 00:11:41.214 real 4m53.228s 00:11:41.214 user 11m56.469s 00:11:41.214 sys 1m37.245s 00:11:41.214 18:48:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.214 18:48:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:41.214 ************************************ 00:11:41.214 END TEST nvmf_target_core 00:11:41.214 ************************************ 00:11:41.214 18:48:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:41.214 18:48:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.214 18:48:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.214 18:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.474 ************************************ 00:11:41.474 START TEST nvmf_target_extra 00:11:41.474 ************************************ 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:41.474 * Looking for test storage... 00:11:41.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.474 ************************************ 00:11:41.474 START TEST nvmf_example 00:11:41.474 ************************************ 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:41.474 * Looking for test storage... 00:11:41.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.474 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.734 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.735 18:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.735 18:48:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:48.307 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:48.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:48.308 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:48.308 Found net devices under 0000:af:00.0: cvl_0_0 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.308 18:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:48.308 Found net devices under 0000:af:00.1: cvl_0_1 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:48.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:48.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:11:48.308 00:11:48.308 --- 10.0.0.2 ping statistics --- 00:11:48.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.308 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:11:48.308 00:11:48.308 --- 10.0.0.1 ping statistics --- 00:11:48.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.308 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2409597 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2409597 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2409597 ']' 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.308 18:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.308 18:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.308 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.569 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.569 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:11:48.569 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:48.569 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.569 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.569 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.570 18:48:33 
00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:48.570 18:48:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:48.830 EAL: No free 2048 kB hugepages reported on node 1
00:11:58.810 Initializing NVMe Controllers
00:11:58.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:58.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:58.810 Initialization complete. Launching workers.
00:11:58.810 ========================================================
00:11:58.810 Latency(us)
00:11:58.810 Device Information : IOPS MiB/s Average min max
00:11:58.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10868.63 42.46 5888.42 1086.42 20238.75
00:11:58.810 ========================================================
00:11:58.810 Total : 10868.63 42.46 5888.42 1086.42 20238.75
00:11:58.810 
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:59.069 rmmod nvme_tcp
00:11:59.069 rmmod nvme_fabrics
00:11:59.069 rmmod nvme_keyring
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2409597 ']'
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2409597
00:11:59.069 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2409597 ']'
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2409597
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2409597
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2409597'
00:11:59.070 killing process with pid 2409597
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 2409597
00:11:59.070 18:48:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 2409597
00:11:59.329 nvmf threads initialize successfully
00:11:59.329 bdev subsystem init successfully
00:11:59.329 created a nvmf target service
00:11:59.329 create targets's poll groups done
00:11:59.329 all subsystems of target started
00:11:59.329 nvmf target is running
00:11:59.329 all subsystems of target stopped
00:11:59.329 destroy targets's poll groups done
00:11:59.329 destroyed the nvmf target service
00:11:59.329 bdev subsystem finish successfully
00:11:59.329 nvmf threads destroy successfully
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:59.329 18:48:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:01.274 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:01.274 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:12:01.274 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:01.274 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:01.536 
00:12:01.536 real 0m19.890s
00:12:01.536 user 0m47.041s
00:12:01.536 sys 0m5.777s
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:01.536 ************************************
00:12:01.536 END TEST nvmf_example
00:12:01.536 ************************************
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
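Two quick consistency checks on the perf table above: the MiB/s column is IOPS times the 4 KiB I/O size, 10868.63 IO/s x 4096 B / 2^20 ≈ 42.46 MiB/s, and Little's law ties throughput to mean latency at the configured queue depth, 64 / 5888.42 us ≈ 10869 IO/s, matching the IOPS column. The run is reproducible against any live listener with the flags from the trace:

  # 4 KiB random mixed workload (30% reads via -M 30) at queue depth 64 for 10 s;
  # the -r transport ID string addresses the fabrics target instead of a local PCI SSD.
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'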
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:01.536 ************************************
00:12:01.536 START TEST nvmf_filesystem
00:12:01.536 ************************************
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:12:01.536 * Looking for test storage...
00:12:01.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:01.536 18:48:46
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:01.536 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:01.537 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:01.537 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:01.537 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:01.537 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:01.537 #define SPDK_CONFIG_H 00:12:01.537 #define SPDK_CONFIG_APPS 1 00:12:01.537 #define SPDK_CONFIG_ARCH native 00:12:01.537 #undef SPDK_CONFIG_ASAN 00:12:01.537 #undef SPDK_CONFIG_AVAHI 00:12:01.537 #undef SPDK_CONFIG_CET 00:12:01.537 #define SPDK_CONFIG_COVERAGE 1 00:12:01.537 #define SPDK_CONFIG_CROSS_PREFIX 00:12:01.537 #undef SPDK_CONFIG_CRYPTO 00:12:01.537 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:01.537 #undef SPDK_CONFIG_CUSTOMOCF 00:12:01.537 #undef SPDK_CONFIG_DAOS 00:12:01.537 #define SPDK_CONFIG_DAOS_DIR 00:12:01.537 #define SPDK_CONFIG_DEBUG 1 00:12:01.537 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:01.537 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:01.537 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:01.537 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:01.537 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:01.537 #undef SPDK_CONFIG_DPDK_UADK 00:12:01.537 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:01.537 #define SPDK_CONFIG_EXAMPLES 1 00:12:01.537 #undef SPDK_CONFIG_FC 00:12:01.537 #define SPDK_CONFIG_FC_PATH 00:12:01.537 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:01.537 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:01.537 #undef SPDK_CONFIG_FUSE 00:12:01.537 #undef SPDK_CONFIG_FUZZER 00:12:01.537 #define SPDK_CONFIG_FUZZER_LIB 00:12:01.537 #undef SPDK_CONFIG_GOLANG 00:12:01.537 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:01.537 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:01.537 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:01.537 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:01.537 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:01.537 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:01.537 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:01.537 #define SPDK_CONFIG_IDXD 1 00:12:01.537 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:01.537 #undef SPDK_CONFIG_IPSEC_MB 00:12:01.537 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:01.537 #define SPDK_CONFIG_ISAL 1 00:12:01.537 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:01.537 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:01.537 #define SPDK_CONFIG_LIBDIR 00:12:01.537 #undef SPDK_CONFIG_LTO 00:12:01.537 #define SPDK_CONFIG_MAX_LCORES 128 00:12:01.537 #define SPDK_CONFIG_NVME_CUSE 1 00:12:01.537 #undef SPDK_CONFIG_OCF 00:12:01.537 #define SPDK_CONFIG_OCF_PATH 00:12:01.537 #define SPDK_CONFIG_OPENSSL_PATH 00:12:01.537 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:01.537 #define SPDK_CONFIG_PGO_DIR 00:12:01.537 #undef SPDK_CONFIG_PGO_USE 00:12:01.537 #define SPDK_CONFIG_PREFIX /usr/local 00:12:01.537 #undef SPDK_CONFIG_RAID5F 00:12:01.537 #undef SPDK_CONFIG_RBD 00:12:01.538 #define SPDK_CONFIG_RDMA 1 00:12:01.538 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:01.538 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:01.538 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:01.538 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:01.538 #define SPDK_CONFIG_SHARED 1 00:12:01.538 #undef SPDK_CONFIG_SMA 00:12:01.538 #define SPDK_CONFIG_TESTS 1 00:12:01.538 #undef SPDK_CONFIG_TSAN 00:12:01.538 #define SPDK_CONFIG_UBLK 1 00:12:01.538 #define SPDK_CONFIG_UBSAN 1 00:12:01.538 #undef SPDK_CONFIG_UNIT_TESTS 00:12:01.538 #undef SPDK_CONFIG_URING 00:12:01.538 #define SPDK_CONFIG_URING_PATH 00:12:01.538 #undef SPDK_CONFIG_URING_ZNS 00:12:01.538 #undef SPDK_CONFIG_USDT 00:12:01.538 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:01.538 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:01.538 #define SPDK_CONFIG_VFIO_USER 1 00:12:01.538 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:12:01.538 #define SPDK_CONFIG_VHOST 1 00:12:01.538 #define SPDK_CONFIG_VIRTIO 1 00:12:01.538 #undef SPDK_CONFIG_VTUNE 00:12:01.538 #define SPDK_CONFIG_VTUNE_DIR 00:12:01.538 #define SPDK_CONFIG_WERROR 1 00:12:01.538 #define SPDK_CONFIG_WPDK_DIR 00:12:01.538 #undef SPDK_CONFIG_XNVME 00:12:01.538 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:01.538 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
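The long run of paired trace lines above and below this point ("# : 0" or "# : 1" immediately followed by "# export SPDK_TEST_...") is autotest_common.sh giving every test knob a default and exporting it; under xtrace the default-assignment idiom prints only its already-expanded argument, which is why the colon lines show bare values. A sketch of the pattern, using one variable from the trace (whether the harness spells it with := or = is an assumption; the traced output is identical either way):

  # ":" is the shell no-op command; the ${VAR:=default} expansion inside it
  # assigns only when VAR is unset, so values exported earlier by the job survive.
  : "${SPDK_TEST_NVMF:=0}"   # traces as "# : 1" in this run because the job set it
  export SPDK_TEST_NVMF

Reading the values off this dump: SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVME_CLI, SPDK_TEST_NVMF, SPDK_TEST_VFIOUSER and SPDK_RUN_UBSAN trace as 1, SPDK_TEST_NVMF_TRANSPORT as tcp, SPDK_TEST_NVMF_NICS as e810, SPDK_AUTOTEST_X as true, and everything else keeps its 0 or empty default.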
00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:01.538 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:01.539 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:01.539 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:01.539 
18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:01.539 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:12:01.540 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:12:01.800 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:12:01.800 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:12:01.801 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2412314 ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2412314 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.DcidhE 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DcidhE/tests/target /tmp/spdk.DcidhE 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:12:01.801 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954339328 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330090496 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=83677106176 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94501478400 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10824372224 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47188557824 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250739200 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=62181376 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18877210624 00:12:01.801 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18900295680 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23085056 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249408000 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250739200 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1331200 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450143744 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450147840 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:12:01.801 * Looking for test storage... 
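Annotation: the run of mounts[...] / fss[...] / sizes[...] / avails[...] / uses[...] assignments above is set_test_storage loading one `df -T` row at a time into associative arrays keyed by mount point; the byte values are df's 1K blocks times 1024 (e.g. 65536 blocks for the 64 MiB spdk_devtmpfs becomes 67108864). A minimal self-contained sketch of that bookkeeping, assuming GNU df's column order; the array names mirror the trace, everything else is illustrative:

  #!/usr/bin/env bash
  # Sketch of the df -T bookkeeping shown in the trace above.
  # df -T prints: source fstype 1K-blocks used avail use% mountpoint.
  declare -A mounts fss sizes avails uses
  while read -r source fs size used avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))    # convert 1K blocks to bytes
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((used * 1024))
  done < <(df -T | grep -v Filesystem)  # same header filter as the trace
  echo "root fs ${fss[/]} has ${avails[/]} bytes free"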
00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=83677106176 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13038964736 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.801 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
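Annotation: the candidate walk above is plain arithmetic over those arrays, and the trace's numbers check out: target_space = avails[/] = 83677106176 is well above requested_size = 2214592512 (the 2 GiB request plus 64 MiB of slack), and new_size = uses[/] + requested_size = 10824372224 + 2214592512 = 13038964736, about 13.8% of sizes[/] = 94501478400, far under the 95% fill cutoff, so / is accepted and SPDK_TEST_STORAGE lands under the spdk tree. Condensed, the check looks like this (a fragment, not standalone: it assumes the arrays from the sketch above plus the $mount and $target_dir of the current candidate):

  requested_size=2214592512             # 2 GiB + 64 MiB slack, as traced
  target_space=${avails[$mount]}
  if (( target_space >= requested_size )); then
    # reject the candidate if the run would push the fs past 95% full
    new_size=$(( uses[$mount] + requested_size ))
    if (( new_size * 100 / sizes[$mount] <= 95 )); then
      export SPDK_TEST_STORAGE=$target_dir
    fi
  fi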
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.802 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.384 
18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:08.384 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:08.384 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:08.384 Found net devices under 0000:af:00.0: cvl_0_0 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:08.384 Found net devices under 0000:af:00.1: cvl_0_1 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.384 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:12:08.384 00:12:08.384 --- 10.0.0.2 ping statistics --- 00:12:08.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.384 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
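Annotation: nvmf_tcp_init above wires up a two-endpoint topology on a single host: the first port moves into a fresh network namespace (cvl_0_0_ns_spdk) as the target at 10.0.0.2/24, the second stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and one ping in each direction (the second completes just below) proves the path before any NVMe/TCP traffic flows. The same wiring, condensed from the commands in the trace:

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side (root ns)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns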
00:12:08.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:12:08.385 00:12:08.385 --- 10.0.0.1 ping statistics --- 00:12:08.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.385 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.385 ************************************ 00:12:08.385 START TEST nvmf_filesystem_no_in_capsule 00:12:08.385 ************************************ 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2415471 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2415471 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2415471 ']' 00:12:08.385 
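Annotation: note how the target command line above is assembled as a bash array: build_nvmf_app_args appends flags to NVMF_APP element by element, and once the namespace exists the whole array is prefixed with NVMF_TARGET_NS_CMD, which is how the eventual `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF` invocation is built with quoting kept intact. A condensed sketch (the binary path is abbreviated here):

  NVMF_APP_SHM_ID=0
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=(./build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # app args
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # netns prefix
  "${NVMF_APP[@]}" -m 0xF &                                # nvmfappstart -m 0xF
  nvmfpid=$!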
18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.385 18:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.385 [2024-07-24 18:48:52.673992] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:12:08.385 [2024-07-24 18:48:52.674045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.385 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.385 [2024-07-24 18:48:52.760783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.385 [2024-07-24 18:48:52.853245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.385 [2024-07-24 18:48:52.853288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.385 [2024-07-24 18:48:52.853298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.385 [2024-07-24 18:48:52.853307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.385 [2024-07-24 18:48:52.853314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
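Annotation: waitforlisten then blocks until the target is usable, as the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above shows. A rough sketch of that wait, assuming a liveness probe via kill -0 (which delivers no signal, only an existence check) and an RPC round-trip as the readiness test; the exact probe the real helper uses may differ:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do            # max_retries=100, as traced
      kill -0 "$pid" || return 1               # target died during startup
      if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        return 0                               # RPC server is answering
      fi
      sleep 0.5
    done
    return 1
  }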
00:12:08.385 [2024-07-24 18:48:52.853364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.385 [2024-07-24 18:48:52.853477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.385 [2024-07-24 18:48:52.853589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.385 [2024-07-24 18:48:52.853589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.643 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.902 [2024-07-24 18:48:53.656125] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.902 Malloc1 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.902 18:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.902 [2024-07-24 18:48:53.809870] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:12:08.902 { 00:12:08.902 "name": "Malloc1", 00:12:08.902 "aliases": [ 00:12:08.902 "63cef5b7-6a79-4c0a-9c1f-92f5b4c39ed2" 00:12:08.902 ], 00:12:08.902 "product_name": "Malloc disk", 00:12:08.902 "block_size": 512, 00:12:08.902 "num_blocks": 1048576, 00:12:08.902 "uuid": "63cef5b7-6a79-4c0a-9c1f-92f5b4c39ed2", 00:12:08.902 "assigned_rate_limits": { 00:12:08.902 "rw_ios_per_sec": 0, 00:12:08.902 "rw_mbytes_per_sec": 0, 00:12:08.902 "r_mbytes_per_sec": 0, 00:12:08.902 "w_mbytes_per_sec": 0 00:12:08.902 }, 00:12:08.902 "claimed": true, 00:12:08.902 "claim_type": "exclusive_write", 00:12:08.902 "zoned": false, 00:12:08.902 "supported_io_types": { 00:12:08.902 "read": 
true, 00:12:08.902 "write": true, 00:12:08.902 "unmap": true, 00:12:08.902 "flush": true, 00:12:08.902 "reset": true, 00:12:08.902 "nvme_admin": false, 00:12:08.902 "nvme_io": false, 00:12:08.902 "nvme_io_md": false, 00:12:08.902 "write_zeroes": true, 00:12:08.902 "zcopy": true, 00:12:08.902 "get_zone_info": false, 00:12:08.902 "zone_management": false, 00:12:08.902 "zone_append": false, 00:12:08.902 "compare": false, 00:12:08.902 "compare_and_write": false, 00:12:08.902 "abort": true, 00:12:08.902 "seek_hole": false, 00:12:08.902 "seek_data": false, 00:12:08.902 "copy": true, 00:12:08.902 "nvme_iov_md": false 00:12:08.902 }, 00:12:08.902 "memory_domains": [ 00:12:08.902 { 00:12:08.902 "dma_device_id": "system", 00:12:08.902 "dma_device_type": 1 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.902 "dma_device_type": 2 00:12:08.902 } 00:12:08.902 ], 00:12:08.902 "driver_specific": {} 00:12:08.902 } 00:12:08.902 ]' 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:12:08.902 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:12:09.161 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:12:09.161 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:12:09.161 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:12:09.161 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:09.161 18:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.539 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.539 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:12:10.539 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.539 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:10.539 18:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:12.444 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:13.382 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.319 ************************************ 00:12:14.319 START TEST filesystem_ext4 00:12:14.319 ************************************ 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
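Annotation: the rpc_cmd sequence above provisions the whole target in five calls and then attaches the initiator. Written out as plain scripts/rpc.py invocations (rpc_cmd is a thin wrapper over it), with $NVME_HOSTNQN/$NVME_HOSTID standing in for the `nvme gen-hostnqn` output captured earlier in the trace:

  #!/usr/bin/env bash
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # in-capsule size 0
  $rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial then polls `lsblk -l -o NAME,SERIAL` until a device
  # with serial SPDKISFASTANDAWESOME shows up (nvme0n1 in this run);
  # the malloc bdev's 1048576 x 512 B blocks match the 512 MiB above.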
00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:12:14.319 18:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:14.319 mke2fs 1.46.5 (30-Dec-2021) 00:12:14.319 Discarding device blocks: 0/522240 done 00:12:14.578 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:14.578 Filesystem UUID: 7e80cd0d-60d2-4ed5-b383-a897b43f8c70 00:12:14.578 Superblock backups stored on blocks: 00:12:14.578 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:14.578 00:12:14.578 Allocating group tables: 0/64 done 00:12:14.578 Writing inode tables: 0/64 done 00:12:15.514 Creating journal (8192 blocks): done 00:12:16.082 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:12:16.082 00:12:16.082 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:12:16.082 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.018 
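Annotation: the body of each filesystem_* subtest is deliberately tiny. The mkfs force flag is picked by type (the trace shows ext4 taking -F above and btrfs taking -f just below), the filesystem goes on the GPT partition parted created earlier, then the test mounts it, creates and deletes a file with syncs in between, unmounts, and confirms via kill -0 that the target survived the I/O. Condensed, with $nvmfpid assumed from the launch sketch earlier:

  #!/usr/bin/env bash
  exercise_filesystem() {
    local fstype=$1 part=$2 force
    case $fstype in
      ext4) force=-F ;;      # mkfs.ext4 uses -F to force
      *)    force=-f ;;      # btrfs/xfs use -f, as the btrfs leg shows
    esac
    "mkfs.$fstype" "$force" "$part"
    mkdir -p /mnt/device
    mount "$part" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"       # the target must still be alive
  }
  exercise_filesystem ext4 /dev/nvme0n1p1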
18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2415471 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.018 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.018 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.018 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.018 00:12:17.018 real 0m2.824s 00:12:17.018 user 0m0.030s 00:12:17.018 sys 0m0.065s 00:12:17.018 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.018 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:17.018 ************************************ 00:12:17.018 END TEST filesystem_ext4 00:12:17.018 ************************************ 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.286 ************************************ 00:12:17.286 START TEST filesystem_btrfs 00:12:17.286 ************************************ 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:12:17.286 18:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:17.286 btrfs-progs v6.6.2 00:12:17.286 See https://btrfs.readthedocs.io for more information. 00:12:17.286 00:12:17.286 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:17.286 NOTE: several default settings have changed in version 5.15, please make sure 00:12:17.286 this does not affect your deployments: 00:12:17.286 - DUP for metadata (-m dup) 00:12:17.286 - enabled no-holes (-O no-holes) 00:12:17.286 - enabled free-space-tree (-R free-space-tree) 00:12:17.286 00:12:17.286 Label: (null) 00:12:17.286 UUID: 3fcc49a7-414b-4450-8d4b-16fe9957ab2c 00:12:17.286 Node size: 16384 00:12:17.286 Sector size: 4096 00:12:17.286 Filesystem size: 510.00MiB 00:12:17.286 Block group profiles: 00:12:17.286 Data: single 8.00MiB 00:12:17.286 Metadata: DUP 32.00MiB 00:12:17.286 System: DUP 8.00MiB 00:12:17.286 SSD detected: yes 00:12:17.286 Zoned device: no 00:12:17.286 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:17.286 Runtime features: free-space-tree 00:12:17.286 Checksum: crc32c 00:12:17.286 Number of devices: 1 00:12:17.286 Devices: 00:12:17.286 ID SIZE PATH 00:12:17.286 1 510.00MiB /dev/nvme0n1p1 00:12:17.286 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:12:17.286 18:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2415471 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
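[Annotation] In the mkfs.btrfs report above, "Metadata: DUP" and "System: DUP" mean those block groups keep two copies on the single device (the default since btrfs-progs 5.15, as the tool's own NOTE says), while data stays single-copy. One way to inspect the resulting allocation after mounting — a standard btrfs-progs command, not something this test runs itself:

    btrfs filesystem df /mnt/device   # lists Data/Metadata/System profiles and usage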
lsblk -l -o NAME 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:18.247 00:12:18.247 real 0m1.120s 00:12:18.247 user 0m0.019s 00:12:18.247 sys 0m0.136s 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:18.247 ************************************ 00:12:18.247 END TEST filesystem_btrfs 00:12:18.247 ************************************ 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.247 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.521 ************************************ 00:12:18.521 START TEST filesystem_xfs 00:12:18.521 ************************************ 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:12:18.521 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:18.521 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:18.521 = sectsz=512 attr=2, projid32bit=1 00:12:18.521 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:18.521 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:18.521 data = bsize=4096 blocks=130560, imaxpct=25 00:12:18.521 = sunit=0 swidth=0 blks 00:12:18.521 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:18.521 log =internal log bsize=4096 blocks=16384, version=2 00:12:18.521 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:18.521 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:19.461 Discarding blocks...Done. 00:12:19.461 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:12:19.461 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2415471 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.997 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.998 00:12:21.998 real 0m3.422s 00:12:21.998 user 0m0.023s 00:12:21.998 sys 0m0.074s 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.998 ************************************ 00:12:21.998 END TEST filesystem_xfs 00:12:21.998 ************************************ 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2415471 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2415471 ']' 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2415471 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2415471 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2415471' 00:12:21.998 killing process with pid 2415471 00:12:21.998 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2415471 00:12:21.998 18:49:06 
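[Annotation] The killprocess trace above (autotest_common.sh lines 948-972) guards the kill: it checks a pid was passed, that the process exists (kill -0), and that the traced command name is the SPDK reactor rather than a sudo wrapper, before signalling and waiting. A sketch keyed to those traced line numbers — the sudo branch's body and any non-Linux branch are not visible in this log, so they are left as assumptions:

    killprocess() {
        local pid=$1 process_name=""
        [ -n "$pid" ] || return 1                         # @948: '[ -z ... ]' guard
        kill -0 "$pid" || return 1                        # @952: process must exist
        if [ "$(uname)" = Linux ]; then                   # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954 -> reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then               # @958: don't signal the wrapper
            :                                             # (branch body not shown in this trace)
        fi
        echo "killing process with pid $pid"              # @966
        kill "$pid"                                       # @967
        wait "$pid"                                       # @972
    }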
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2415471 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:22.567 00:12:22.567 real 0m14.750s 00:12:22.567 user 0m57.797s 00:12:22.567 sys 0m1.386s 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.567 ************************************ 00:12:22.567 END TEST nvmf_filesystem_no_in_capsule 00:12:22.567 ************************************ 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.567 ************************************ 00:12:22.567 START TEST nvmf_filesystem_in_capsule 00:12:22.567 ************************************ 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2418321 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2418321 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2418321 ']' 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:22.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.567 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.567 [2024-07-24 18:49:07.486630] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:12:22.567 [2024-07-24 18:49:07.486689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.567 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.826 [2024-07-24 18:49:07.577881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.826 [2024-07-24 18:49:07.663466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.826 [2024-07-24 18:49:07.663510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.826 [2024-07-24 18:49:07.663520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.826 [2024-07-24 18:49:07.663530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.826 [2024-07-24 18:49:07.663537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.826 [2024-07-24 18:49:07.663594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.826 [2024-07-24 18:49:07.663707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.826 [2024-07-24 18:49:07.663742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.826 [2024-07-24 18:49:07.663742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.764 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.764 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:12:23.764 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:12:23.765 [2024-07-24 18:49:08.482114] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 [2024-07-24 18:49:08.643814] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:12:23.765 18:49:08 
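[Annotation] The RPCs traced across this stretch assemble the in-capsule target end to end. Collected into one plain sequence, with every flag copied from the trace; the assumption is that rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock (the waitforlisten message above points at that socket):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: the in-capsule data size under test
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ram-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The only functional difference from the no_in_capsule run that ended above is the -c 4096 on nvmf_create_transport.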
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:12:23.765 { 00:12:23.765 "name": "Malloc1", 00:12:23.765 "aliases": [ 00:12:23.765 "6cc2df02-8be3-4420-9fc8-3d6d258fafa5" 00:12:23.765 ], 00:12:23.765 "product_name": "Malloc disk", 00:12:23.765 "block_size": 512, 00:12:23.765 "num_blocks": 1048576, 00:12:23.765 "uuid": "6cc2df02-8be3-4420-9fc8-3d6d258fafa5", 00:12:23.765 "assigned_rate_limits": { 00:12:23.765 "rw_ios_per_sec": 0, 00:12:23.765 "rw_mbytes_per_sec": 0, 00:12:23.765 "r_mbytes_per_sec": 0, 00:12:23.765 "w_mbytes_per_sec": 0 00:12:23.765 }, 00:12:23.765 "claimed": true, 00:12:23.765 "claim_type": "exclusive_write", 00:12:23.765 "zoned": false, 00:12:23.765 "supported_io_types": { 00:12:23.765 "read": true, 00:12:23.765 "write": true, 00:12:23.765 "unmap": true, 00:12:23.765 "flush": true, 00:12:23.765 "reset": true, 00:12:23.765 "nvme_admin": false, 00:12:23.765 "nvme_io": false, 00:12:23.765 "nvme_io_md": false, 00:12:23.765 "write_zeroes": true, 00:12:23.765 "zcopy": true, 00:12:23.765 "get_zone_info": false, 00:12:23.765 "zone_management": false, 00:12:23.765 "zone_append": false, 00:12:23.765 "compare": false, 00:12:23.765 "compare_and_write": false, 00:12:23.765 "abort": true, 00:12:23.765 "seek_hole": false, 00:12:23.765 "seek_data": false, 00:12:23.765 "copy": true, 00:12:23.765 "nvme_iov_md": false 00:12:23.765 }, 00:12:23.765 "memory_domains": [ 00:12:23.765 { 00:12:23.765 "dma_device_id": "system", 00:12:23.765 "dma_device_type": 1 00:12:23.765 }, 00:12:23.765 { 00:12:23.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.765 "dma_device_type": 2 00:12:23.765 } 00:12:23.765 ], 00:12:23.765 "driver_specific": {} 00:12:23.765 } 00:12:23.765 ]' 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:12:23.765 18:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:23.765 18:49:08 
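[Annotation] The JSON dump and jq calls above implement get_bdev_size: block size times block count, reported in MiB. With the values shown, 512 B x 1048576 blocks = 536870912 B = 512 MiB, which filesystem.sh then scales back to bytes as malloc_size=536870912. A sketch under the assumption that rpc_cmd resolves to scripts/rpc.py and that the helper divides down to MiB (the trace ending at `echo 512` is consistent with that):

    get_bdev_size() {
        local bdev_info bs nb
        bdev_info=$(rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # -> 512
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # -> 1048576
        echo $(( bs * nb / 1024 / 1024 ))             # -> 512 (MiB)
    }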
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.144 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.144 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:12:25.144 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.144 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:25.144 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:12:27.049 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:27.308 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
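[Annotation] After the nvme connect above, waitforserial (autotest_common.sh lines 1196-1206) polls until a block device carrying the serial SPDKISFASTANDAWESOME appears. A sketch of that poll loop reconstructed from the traced counters; the per-iteration sleep is an assumption, since the trace shows only the initial `sleep 2` and a first probe that already succeeds:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2                                            # give the kernel time to enumerate
        while (( i++ <= 15 )); do                          # @1204: bounded retry
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2                                        # assumed retry interval
        done
        return 1
    }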
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:27.567 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:27.826 18:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:28.763 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.764 ************************************ 00:12:28.764 START TEST filesystem_in_capsule_ext4 00:12:28.764 ************************************ 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:12:28.764 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:28.764 mke2fs 1.46.5 (30-Dec-2021) 00:12:29.023 Discarding device blocks: 0/522240 done 00:12:29.023 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:29.023 Filesystem UUID: a7089824-f94c-4cc6-a70c-b34e030febf4 00:12:29.023 Superblock backups stored on blocks: 00:12:29.023 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:12:29.023 00:12:29.023 Allocating group tables: 0/64 done 00:12:29.023 Writing inode tables: 0/64 done 00:12:29.282 Creating journal (8192 blocks): done 00:12:29.282 Writing superblocks and filesystem accounting information: 0/64 done 00:12:29.282 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:29.282 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2418321 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.541 00:12:29.541 real 0m0.554s 00:12:29.541 user 0m0.023s 00:12:29.541 sys 0m0.066s 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:29.541 ************************************ 00:12:29.541 END TEST filesystem_in_capsule_ext4 00:12:29.541 ************************************ 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.541 18:49:14 
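[Annotation] Every mkfs in this log reports the same usable size, which is worth pinning down: ext4 formats 522240 blocks of 1 KiB, while xfs and btrfs see 130560 blocks of 4 KiB, and both work out to the same 510 MiB partition:

    522240 * 1024 = 534773760 B   (ext4, 1 KiB blocks)
    130560 * 4096 = 534773760 B   (xfs/btrfs, 4 KiB blocks) = 510 MiB
    536870912 - 534773760 = 2097152 B = 2 MiB

The namespace itself is 512 MiB (536870912 B per the bdev JSON earlier), so the GPT partition spanning 0%-100% gives up about 2 MiB, plausibly to partition-table and alignment overhead — that last attribution is an inference, not something the log states.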
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.541 ************************************ 00:12:29.541 START TEST filesystem_in_capsule_btrfs 00:12:29.541 ************************************ 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:12:29.541 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:29.801 btrfs-progs v6.6.2 00:12:29.801 See https://btrfs.readthedocs.io for more information. 00:12:29.801 00:12:29.801 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:29.801 NOTE: several default settings have changed in version 5.15, please make sure 00:12:29.801 this does not affect your deployments: 00:12:29.801 - DUP for metadata (-m dup) 00:12:29.801 - enabled no-holes (-O no-holes) 00:12:29.801 - enabled free-space-tree (-R free-space-tree) 00:12:29.801 00:12:29.801 Label: (null) 00:12:29.801 UUID: 454abaaf-132c-46a7-a450-ec86e9fcc15d 00:12:29.801 Node size: 16384 00:12:29.801 Sector size: 4096 00:12:29.801 Filesystem size: 510.00MiB 00:12:29.801 Block group profiles: 00:12:29.801 Data: single 8.00MiB 00:12:29.801 Metadata: DUP 32.00MiB 00:12:29.801 System: DUP 8.00MiB 00:12:29.801 SSD detected: yes 00:12:29.801 Zoned device: no 00:12:29.801 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:29.801 Runtime features: free-space-tree 00:12:29.801 Checksum: crc32c 00:12:29.801 Number of devices: 1 00:12:29.801 Devices: 00:12:29.801 ID SIZE PATH 00:12:29.801 1 510.00MiB /dev/nvme0n1p1 00:12:29.801 00:12:29.801 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:12:29.801 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2418321 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:30.370 00:12:30.370 real 0m0.816s 00:12:30.370 user 0m0.026s 00:12:30.370 sys 0m0.129s 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.370 18:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:30.370 ************************************ 00:12:30.370 END TEST filesystem_in_capsule_btrfs 00:12:30.370 ************************************ 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.370 ************************************ 00:12:30.370 START TEST filesystem_in_capsule_xfs 00:12:30.370 ************************************ 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:12:30.370 18:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:30.370 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:30.370 = sectsz=512 attr=2, projid32bit=1 00:12:30.370 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:30.370 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:30.370 data = bsize=4096 blocks=130560, imaxpct=25 00:12:30.370 = sunit=0 swidth=0 blks 00:12:30.370 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:30.370 log =internal log bsize=4096 blocks=16384, version=2 00:12:30.370 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:30.370 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:12:31.749 Discarding blocks...Done. 00:12:31.749 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:12:31.749 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2418321 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:34.291 00:12:34.291 real 0m3.613s 00:12:34.291 user 0m0.023s 00:12:34.291 sys 0m0.075s 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:34.291 ************************************ 00:12:34.291 END TEST filesystem_in_capsule_xfs 00:12:34.291 ************************************ 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:34.291 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.291 18:49:19 
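[Annotation] The teardown traced in this stretch mirrors every run in this file: drop the test partition, flush, and detach the initiator before deleting the subsystem and killing the target. As plain commands, all copied from the trace; the flock presumably serializes parted against concurrent device probing, which is an inference:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # remove partition 1 under an exclusive lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"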
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2418321 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2418321 ']' 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2418321 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:12:34.291 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.292 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418321 00:12:34.292 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.292 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.292 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418321' 00:12:34.292 killing process with pid 2418321 00:12:34.292 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2418321 00:12:34.292 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2418321 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:34.859 00:12:34.859 real 0m12.141s 00:12:34.859 user 0m47.463s 
00:12:34.859 sys 0m1.358s 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.859 ************************************ 00:12:34.859 END TEST nvmf_filesystem_in_capsule 00:12:34.859 ************************************ 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:34.859 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.860 rmmod nvme_tcp 00:12:34.860 rmmod nvme_fabrics 00:12:34.860 rmmod nvme_keyring 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.860 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.766 00:12:36.766 real 0m35.374s 00:12:36.766 user 1m47.172s 00:12:36.766 sys 0m7.296s 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:36.766 ************************************ 00:12:36.766 END TEST nvmf_filesystem 00:12:36.766 ************************************ 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.766 18:49:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.026 ************************************ 00:12:37.026 START TEST nvmf_target_discovery 00:12:37.026 ************************************ 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:37.026 * Looking for test storage... 00:12:37.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.026 18:49:21 
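The sourced nvmf/common.sh derives the initiator identity once with nvme-cli and reuses it for every discover/connect in the test. A minimal standalone sketch of that step (the suffix-stripping line is an illustration of how the host ID relates to the NQN, not the script's exact code):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID half of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")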
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.026 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.027 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.303 18:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:42.303 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:42.303 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.303 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:42.304 Found net devices under 0000:af:00.0: cvl_0_0 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.304 18:49:27 
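The pci_net_devs glob above maps each PCI function to its kernel netdev through sysfs. The same lookup as a one-liner, with the device addresses taken from this log:

  for pci in 0000:af:00.0 0000:af:00.1; do
    ls /sys/bus/pci/devices/$pci/net/    # prints cvl_0_0, then cvl_0_1
  done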
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:42.304 Found net devices under 0000:af:00.1: cvl_0_1 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.304 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.563 18:49:27 
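Condensed, nvmf_tcp_init builds a two-endpoint topology: the target port is moved into a private network namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic really traverses the physical link. The commands, collected from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up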
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:42.563 00:12:42.563 --- 10.0.0.2 ping statistics --- 00:12:42.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.563 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:12:42.563 00:12:42.563 --- 10.0.0.1 ping statistics --- 00:12:42.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.563 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.563 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2424179 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2424179 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2424179 ']' 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.822 18:49:27 
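The ping pair above verifies both directions of the link before the target starts, and the iptables rule opens the NVMe/TCP port on the initiator interface. The target itself is launched inside the namespace; a trimmed sketch (binary path relative to the spdk checkout is an assumption, core mask 0xF matching the four reactors reported below):

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!    # the harness then polls /var/tmp/spdk.sock until RPC answers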
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.822 18:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.822 [2024-07-24 18:49:27.647089] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:12:42.822 [2024-07-24 18:49:27.647148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.822 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.822 [2024-07-24 18:49:27.734760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.822 [2024-07-24 18:49:27.826171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.822 [2024-07-24 18:49:27.826217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.822 [2024-07-24 18:49:27.826227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.822 [2024-07-24 18:49:27.826236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.822 [2024-07-24 18:49:27.826244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.822 [2024-07-24 18:49:27.826292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.822 [2024-07-24 18:49:27.826406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.822 [2024-07-24 18:49:27.826517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.822 [2024-07-24 18:49:27.826517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 [2024-07-24 18:49:28.644386] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 Null1 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 [2024-07-24 18:49:28.696721] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 Null2 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 
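The seq 1 4 loop applies one provisioning recipe per subsystem. Written out against SPDK's standalone RPC client (an assumption for readability: the harness goes through its rpc_cmd wrapper, but scripts/rpc.py issues the same methods over /var/tmp/spdk.sock):

  rpc="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512    # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done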
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 Null3 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.758 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.016 Null4 00:12:44.016 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.016 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:44.016 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.016 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.016 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.017 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:44.017 00:12:44.017 Discovery Log Number of Records 6, Generation counter 6 00:12:44.017 =====Discovery Log Entry 0====== 00:12:44.017 trtype: tcp 00:12:44.017 adrfam: ipv4 00:12:44.017 subtype: current discovery subsystem 00:12:44.017 treq: not required 00:12:44.017 portid: 0 00:12:44.017 trsvcid: 4420 00:12:44.017 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:44.017 traddr: 10.0.0.2 00:12:44.017 eflags: explicit discovery connections, duplicate discovery information 00:12:44.017 sectype: none 00:12:44.017 =====Discovery Log Entry 1====== 00:12:44.017 trtype: tcp 00:12:44.017 adrfam: ipv4 00:12:44.017 subtype: nvme subsystem 00:12:44.017 treq: not required 00:12:44.017 portid: 0 00:12:44.017 trsvcid: 4420 00:12:44.017 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:44.017 traddr: 10.0.0.2 00:12:44.017 eflags: none 00:12:44.017 sectype: none 00:12:44.017 =====Discovery Log Entry 2====== 00:12:44.017 trtype: tcp 00:12:44.017 adrfam: ipv4 00:12:44.017 subtype: nvme subsystem 00:12:44.017 treq: not required 00:12:44.017 portid: 0 00:12:44.017 trsvcid: 4420 00:12:44.017 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:44.017 traddr: 10.0.0.2 00:12:44.017 eflags: none 00:12:44.017 sectype: none 00:12:44.017 =====Discovery Log Entry 3====== 00:12:44.017 trtype: tcp 00:12:44.017 adrfam: ipv4 00:12:44.017 subtype: nvme subsystem 00:12:44.017 treq: not required 00:12:44.017 portid: 0 00:12:44.017 trsvcid: 4420 00:12:44.017 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:44.017 traddr: 10.0.0.2 00:12:44.017 eflags: none 00:12:44.017 sectype: none 00:12:44.017 =====Discovery Log Entry 4====== 00:12:44.017 trtype: tcp 00:12:44.017 adrfam: ipv4 00:12:44.017 subtype: nvme subsystem 00:12:44.017 treq: not required 00:12:44.017 portid: 0 00:12:44.017 trsvcid: 4420 00:12:44.017 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:44.017 traddr: 10.0.0.2 00:12:44.017 eflags: none 00:12:44.017 sectype: none 00:12:44.017 =====Discovery Log Entry 5====== 00:12:44.017 trtype: tcp 00:12:44.017 adrfam: ipv4 00:12:44.017 subtype: discovery subsystem referral 00:12:44.017 treq: not required 00:12:44.017 portid: 0 00:12:44.017 trsvcid: 4430 00:12:44.017 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:44.017 traddr: 10.0.0.2 00:12:44.017 eflags: none 00:12:44.017 sectype: none 00:12:44.017 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:44.017 Perform nvmf subsystem discovery via RPC 00:12:44.017 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:44.017 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.017 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.276 [ 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:44.276 "subtype": "Discovery", 00:12:44.276 "listen_addresses": [ 00:12:44.276 { 00:12:44.276 "trtype": "TCP", 00:12:44.276 "adrfam": "IPv4", 00:12:44.276 "traddr": "10.0.0.2", 00:12:44.276 "trsvcid": "4420" 00:12:44.276 } 00:12:44.276 ], 00:12:44.276 "allow_any_host": true, 00:12:44.276 "hosts": [] 00:12:44.276 }, 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.276 "subtype": "NVMe", 00:12:44.276 "listen_addresses": [ 00:12:44.276 { 00:12:44.276 "trtype": "TCP", 00:12:44.276 "adrfam": "IPv4", 00:12:44.276 
"traddr": "10.0.0.2", 00:12:44.276 "trsvcid": "4420" 00:12:44.276 } 00:12:44.276 ], 00:12:44.276 "allow_any_host": true, 00:12:44.276 "hosts": [], 00:12:44.276 "serial_number": "SPDK00000000000001", 00:12:44.276 "model_number": "SPDK bdev Controller", 00:12:44.276 "max_namespaces": 32, 00:12:44.276 "min_cntlid": 1, 00:12:44.276 "max_cntlid": 65519, 00:12:44.276 "namespaces": [ 00:12:44.276 { 00:12:44.276 "nsid": 1, 00:12:44.276 "bdev_name": "Null1", 00:12:44.276 "name": "Null1", 00:12:44.276 "nguid": "C4D09288F9FB4DB8A650C8833DA1E6D0", 00:12:44.276 "uuid": "c4d09288-f9fb-4db8-a650-c8833da1e6d0" 00:12:44.276 } 00:12:44.276 ] 00:12:44.276 }, 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:44.276 "subtype": "NVMe", 00:12:44.276 "listen_addresses": [ 00:12:44.276 { 00:12:44.276 "trtype": "TCP", 00:12:44.276 "adrfam": "IPv4", 00:12:44.276 "traddr": "10.0.0.2", 00:12:44.276 "trsvcid": "4420" 00:12:44.276 } 00:12:44.276 ], 00:12:44.276 "allow_any_host": true, 00:12:44.276 "hosts": [], 00:12:44.276 "serial_number": "SPDK00000000000002", 00:12:44.276 "model_number": "SPDK bdev Controller", 00:12:44.276 "max_namespaces": 32, 00:12:44.276 "min_cntlid": 1, 00:12:44.276 "max_cntlid": 65519, 00:12:44.276 "namespaces": [ 00:12:44.276 { 00:12:44.276 "nsid": 1, 00:12:44.276 "bdev_name": "Null2", 00:12:44.276 "name": "Null2", 00:12:44.276 "nguid": "4D2D8804EB3246D2A8E1C5A3767A7AB2", 00:12:44.276 "uuid": "4d2d8804-eb32-46d2-a8e1-c5a3767a7ab2" 00:12:44.276 } 00:12:44.276 ] 00:12:44.276 }, 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:44.276 "subtype": "NVMe", 00:12:44.276 "listen_addresses": [ 00:12:44.276 { 00:12:44.276 "trtype": "TCP", 00:12:44.276 "adrfam": "IPv4", 00:12:44.276 "traddr": "10.0.0.2", 00:12:44.276 "trsvcid": "4420" 00:12:44.276 } 00:12:44.276 ], 00:12:44.276 "allow_any_host": true, 00:12:44.276 "hosts": [], 00:12:44.276 "serial_number": "SPDK00000000000003", 00:12:44.276 "model_number": "SPDK bdev Controller", 00:12:44.276 "max_namespaces": 32, 00:12:44.276 "min_cntlid": 1, 00:12:44.276 "max_cntlid": 65519, 00:12:44.276 "namespaces": [ 00:12:44.276 { 00:12:44.276 "nsid": 1, 00:12:44.276 "bdev_name": "Null3", 00:12:44.276 "name": "Null3", 00:12:44.276 "nguid": "ED33566D51824603B4E0647FD7658DD9", 00:12:44.276 "uuid": "ed33566d-5182-4603-b4e0-647fd7658dd9" 00:12:44.276 } 00:12:44.276 ] 00:12:44.276 }, 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:44.276 "subtype": "NVMe", 00:12:44.276 "listen_addresses": [ 00:12:44.276 { 00:12:44.276 "trtype": "TCP", 00:12:44.276 "adrfam": "IPv4", 00:12:44.276 "traddr": "10.0.0.2", 00:12:44.276 "trsvcid": "4420" 00:12:44.276 } 00:12:44.276 ], 00:12:44.276 "allow_any_host": true, 00:12:44.276 "hosts": [], 00:12:44.276 "serial_number": "SPDK00000000000004", 00:12:44.276 "model_number": "SPDK bdev Controller", 00:12:44.276 "max_namespaces": 32, 00:12:44.277 "min_cntlid": 1, 00:12:44.277 "max_cntlid": 65519, 00:12:44.277 "namespaces": [ 00:12:44.277 { 00:12:44.277 "nsid": 1, 00:12:44.277 "bdev_name": "Null4", 00:12:44.277 "name": "Null4", 00:12:44.277 "nguid": "F2AC7AC838B74844A10C844A7E55E6E4", 00:12:44.277 "uuid": "f2ac7ac8-38b7-4844-a10c-844a7e55e6e4" 00:12:44.277 } 00:12:44.277 ] 00:12:44.277 } 00:12:44.277 ] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:44.277 18:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:44.277 18:49:29 
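Teardown inverts the setup loop, removing each subsystem before its backing bdev so no namespace still references it. With $rpc as in the earlier sketch:

  for i in 1 2 3 4; do
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    $rpc bdev_null_delete Null$i
  done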
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.277 rmmod nvme_tcp 00:12:44.277 rmmod nvme_fabrics 00:12:44.277 rmmod nvme_keyring 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.277 18:49:29 
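The empty check_bdevs value above is the test's leak check: after the deletions, bdev_get_bdevs must return an empty array. A standalone equivalent of that assertion:

  leftover=$($rpc bdev_get_bdevs | jq -r '.[].name')
  [ -z "$leftover" ] || { echo "stale bdevs: $leftover"; exit 1; }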
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2424179 ']' 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2424179 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2424179 ']' 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2424179 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.277 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2424179 00:12:44.536 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.536 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.536 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2424179' 00:12:44.536 killing process with pid 2424179 00:12:44.536 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2424179 00:12:44.536 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2424179 00:12:44.536 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.537 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.073 00:12:47.073 real 0m9.766s 00:12:47.073 user 0m8.392s 00:12:47.073 sys 0m4.725s 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.073 ************************************ 00:12:47.073 END TEST nvmf_target_discovery 00:12:47.073 ************************************ 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.073 ************************************ 00:12:47.073 START TEST nvmf_referrals 00:12:47.073 ************************************ 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:47.073 * Looking for test storage... 00:12:47.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.073 18:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.073 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.074 18:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.074 18:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.384 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.384 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:52.385 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.385 18:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:52.385 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:52.385 Found net devices under 0000:af:00.0: cvl_0_0 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 
00:12:52.385 Found net devices under 0000:af:00.1: cvl_0_1 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.385 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:52.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:52.645 00:12:52.645 --- 10.0.0.2 ping statistics --- 00:12:52.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.645 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:12:52.645 00:12:52.645 --- 10.0.0.1 ping statistics --- 00:12:52.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.645 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2428098 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2428098 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2428098 ']' 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
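The block above is nvmf_tcp_init from nvmf/common.sh: with NET_TYPE=phy it splits the two e810 ports between the default namespace (initiator side) and a dedicated namespace (target side), so the NVMe/TCP traffic in the tests that follow crosses real hardware. A minimal sketch of the equivalent setup, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing used in this run:

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is also why nvmfappstart in this trace launches nvmf_tgt under 'ip netns exec cvl_0_0_ns_spdk' while the nvme CLI commands on the initiator side run in the default namespace.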
00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.645 18:49:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.904 [2024-07-24 18:49:37.684685] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:12:52.905 [2024-07-24 18:49:37.684754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.905 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.905 [2024-07-24 18:49:37.771533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.905 [2024-07-24 18:49:37.864285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.905 [2024-07-24 18:49:37.864323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.905 [2024-07-24 18:49:37.864333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.905 [2024-07-24 18:49:37.864342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.905 [2024-07-24 18:49:37.864349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.905 [2024-07-24 18:49:37.864399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.905 [2024-07-24 18:49:37.864513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.905 [2024-07-24 18:49:37.864640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.905 [2024-07-24 18:49:37.864642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 [2024-07-24 18:49:38.678391] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 [2024-07-24 18:49:38.698599] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.842 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.101 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:54.102 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:54.102 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:54.102 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.102 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.102 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.102 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.102 18:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.102 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
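The section above exercises the referral RPCs end to end: add referrals, read them back over RPC, compare against what a host sees, then remove them. Reduced to its core, and assuming SPDK's scripts/rpc.py on PATH (rpc_cmd in the trace is the autotest wrapper around the same RPC methods), the round trip looks like:

    # Register referrals on the discovery subsystem; the referral targets
    # (127.0.0.2:4430 here) are only advertised, never dialed by this test.
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1

    # Target-side view: one traddr per referral, sorted for a stable compare.
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Cleanup mirrors the add, keyed by address, port and subsystem NQN.
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1

That is why the rpc path above prints 127.0.0.2 twice: two referrals share the traddr and differ only in subsystem NQN.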
00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.361 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.620 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.879 18:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:54.879 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.137 18:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
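On the host side, get_referral_ips nvme and get_discovery_entries scrape the same source: the discovery log page rendered as JSON by nvme-cli, narrowed with a jq select on the record subtype. The pattern, reduced to its essentials (the harness additionally passes the per-run --hostnqn/--hostid pair shown in the trace):

    # All referral traddrs, as the host sees them:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # Only entries pointing at an NVM subsystem, i.e. referrals added with
    # -n nqn.2016-06.io.spdk:cnode1:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq '.records[] | select(.subtype == "nvme subsystem")'

Referrals added with -n discovery surface with subtype "discovery subsystem referral" instead, which is what the [[ ... == ... ]] assertions around this point check.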
00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.137 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:55.396 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
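nvmftestfini then unwinds the fixture in reverse: initiator-side kernel modules out, target process down, namespace and addresses gone. A sketch of what the cleanup below amounts to (the real helpers retry the modprobe for up to 20 iterations, and the netns removal is an assumed equivalent of _remove_spdk_ns, whose output the trace redirects away):

    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                  # nvmf_tgt started by nvmfappstart
    wait "$nvmfpid" 2>/dev/null || true
    ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns boils down to
    ip -4 addr flush cvl_0_1         # drop the initiator-side test address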
00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.397 rmmod nvme_tcp 00:12:55.397 rmmod nvme_fabrics 00:12:55.397 rmmod nvme_keyring 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2428098 ']' 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2428098 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2428098 ']' 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2428098 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.397 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2428098 00:12:55.655 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:55.655 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2428098' 00:12:55.656 killing process with pid 2428098 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2428098 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2428098 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.656 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.191 00:12:58.191 real 0m11.080s 00:12:58.191 user 0m13.688s 00:12:58.191 sys 0m5.122s 00:12:58.191 18:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.191 ************************************ 00:12:58.191 END TEST nvmf_referrals 00:12:58.191 ************************************ 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.191 ************************************ 00:12:58.191 START TEST nvmf_connect_disconnect 00:12:58.191 ************************************ 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:58.191 * Looking for test storage... 00:12:58.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.191 18:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.191 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.192 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:03.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:03.479 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.479 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.480 18:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:03.480 Found net devices under 0000:af:00.0: cvl_0_0 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:03.480 Found net devices under 0000:af:00.1: cvl_0_1 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.480 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.739 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:03.999 00:13:03.999 --- 10.0.0.2 ping statistics --- 00:13:03.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.999 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:13:03.999 00:13:03.999 --- 10.0.0.1 ping statistics --- 00:13:03.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.999 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2432424 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2432424 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2432424 ']' 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.999 18:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.999 [2024-07-24 18:49:48.866022] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
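The nvmf_tcp_init sequence traced above is how these tests get a real-hardware loopback on a single host: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A minimal sketch of the same topology, assuming the cvl_0_0/cvl_0_1 netdev names the log reports (substitute your own NIC's interfaces):

# Clear any stale addressing, then isolate the target-side port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the link: initiator 10.0.0.1 (root ns), target 10.0.0.2 (namespace).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links up (loopback too, inside the namespace).
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP port 4420 (NVMe/TCP) for traffic arriving on cvl_0_1, then verify
# reachability in both directions, exactly as the pings above do.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

From here on, every target-side command in the log is simply the same command prefixed with "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD.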
00:13:03.999 [2024-07-24 18:49:48.866084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.999 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.999 [2024-07-24 18:49:48.953415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.258 [2024-07-24 18:49:49.045181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.258 [2024-07-24 18:49:49.045226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.258 [2024-07-24 18:49:49.045236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.258 [2024-07-24 18:49:49.045245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.258 [2024-07-24 18:49:49.045252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.258 [2024-07-24 18:49:49.045303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.258 [2024-07-24 18:49:49.045415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.258 [2024-07-24 18:49:49.045525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.258 [2024-07-24 18:49:49.045526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 [2024-07-24 18:49:49.787927] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.825 18:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.825 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.084 [2024-07-24 18:49:49.847680] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:05.084 18:49:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:08.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.436 18:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.436 rmmod nvme_tcp 00:13:22.436 rmmod nvme_fabrics 00:13:22.436 rmmod nvme_keyring 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2432424 ']' 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2432424 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2432424 ']' 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2432424 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2432424 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2432424' 00:13:22.436 killing process with pid 2432424 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2432424 00:13:22.436 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2432424 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.695 18:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.599 00:13:24.599 real 0m26.730s 00:13:24.599 user 1m14.578s 00:13:24.599 sys 0m5.874s 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.599 18:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.599 ************************************ 00:13:24.599 END TEST nvmf_connect_disconnect 00:13:24.599 ************************************ 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.599 ************************************ 00:13:24.599 START TEST nvmf_multitarget 00:13:24.599 ************************************ 00:13:24.599 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:24.875 * Looking for test storage... 00:13:24.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.875 18:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.875 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.876 18:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:31.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.442 18:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:31.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:31.442 Found net devices under 0000:af:00.0: cvl_0_0 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:31.442 Found net devices under 0000:af:00.1: cvl_0_1 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.442 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:31.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:31.443 00:13:31.443 --- 10.0.0.2 ping statistics --- 00:13:31.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.443 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:13:31.443 00:13:31.443 --- 10.0.0.1 ping statistics --- 00:13:31.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.443 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2439401 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2439401 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2439401 ']' 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
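As in the connect_disconnect run above, nvmfappstart launches nvmf_tgt inside the target namespace and waits for its RPC socket before the test provisions the subsystem over JSON-RPC. A hedged sketch of that flow, using the RPC calls the log shows; the scripts/rpc.py path and the socket-polling loop are assumptions standing in for the harness's waitforlisten helper:

# Start the target in the namespace: shm id 0, tracepoint mask 0xFFFF, cores 0-3.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Wait for the RPC socket to appear (waitforlisten does this more carefully).
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

# Provision it the way connect_disconnect.sh did: TCP transport, a 64 MiB / 512 B-block
# malloc bdev (returned as Malloc0), one subsystem, its namespace, and a TCP listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines earlier are the num_iterations=5 passes of that test's loop, each an initiator-side nvme connect to 10.0.0.2:4420 followed by an nvme disconnect.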
00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.443 18:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:31.443 [2024-07-24 18:50:15.755853] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:13:31.443 [2024-07-24 18:50:15.755909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.443 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.443 [2024-07-24 18:50:15.842152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.443 [2024-07-24 18:50:15.933451] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.443 [2024-07-24 18:50:15.933494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.443 [2024-07-24 18:50:15.933509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.443 [2024-07-24 18:50:15.933518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.443 [2024-07-24 18:50:15.933525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.443 [2024-07-24 18:50:15.933575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.443 [2024-07-24 18:50:15.933631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.443 [2024-07-24 18:50:15.933696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.443 [2024-07-24 18:50:15.933696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:31.702 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:31.960 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:31.960 18:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:31.960 "nvmf_tgt_1" 00:13:31.960 18:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:32.218 "nvmf_tgt_2" 00:13:32.218 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:32.218 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:32.218 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:32.218 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:32.517 true 00:13:32.517 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:32.517 true 00:13:32.517 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:32.517 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.776 rmmod nvme_tcp 00:13:32.776 rmmod nvme_fabrics 00:13:32.776 rmmod nvme_keyring 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2439401 ']' 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2439401 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2439401 ']' 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2439401 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
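The pass/fail logic of the multitarget test is just target counting: nvmf_get_targets is piped through jq length before and after each create/delete, and each '[' N '!=' N ']' check above must be false. A compact sketch of the same round-trip, assuming the in-tree multitarget_rpc.py wrapper from the log and jq on PATH (in SPDK's RPC, -s sets the new target's maximum subsystem count):

RPC=./test/nvmf/target/multitarget_rpc.py

# Exactly one (default) target exists at startup.
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]

# Create two named targets; the count must grow to three.
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]

# Delete both (each prints "true" on success, as seen above);
# only the default target should remain.
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]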
00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2439401 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2439401' 00:13:32.776 killing process with pid 2439401 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2439401 00:13:32.776 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2439401 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.034 18:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.571 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.571 00:13:35.571 real 0m10.396s 00:13:35.571 user 0m10.574s 00:13:35.571 sys 0m4.988s 00:13:35.571 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.571 18:50:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:35.571 ************************************ 00:13:35.571 END TEST nvmf_multitarget 00:13:35.571 ************************************ 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.571 ************************************ 00:13:35.571 START TEST nvmf_rpc 00:13:35.571 ************************************ 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:35.571 * Looking for test storage... 
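The shutdown just traced is the suite's standard teardown (nvmftestfini): sync, unload the initiator-side kernel modules, kill the target by its recorded pid after confirming via ps that the pid still names an SPDK reactor (and not, say, a recycled sudo), then flush the initiator-side test address. Roughly, as visible in the trace (the real common.sh retries the modprobe in a {1..20} loop):

    sync
    modprobe -v -r nvme-tcp       # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    comm=$(ps --no-headers -o comm= "$nvmfpid")          # here: reactor_0
    [ "$comm" != sudo ] && kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush cvl_0_1      # drop the initiator-side test address

The real/user/sys block and the starred END/START banners come from run_test, which times each sub-test; nvmf_rpc now begins with the same bring-up the multitarget test used.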
00:13:35.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.571 18:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.571 18:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.844 18:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.844 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:40.845 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:40.845 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.845 
18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:40.845 Found net devices under 0000:af:00.0: cvl_0_0 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:40.845 Found net devices under 0000:af:00.1: cvl_0_1 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.845 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.845 18:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.104 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.105 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.105 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.105 18:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:41.105 00:13:41.105 --- 10.0.0.2 ping statistics --- 00:13:41.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.105 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:13:41.105 00:13:41.105 --- 10.0.0.1 ping statistics --- 00:13:41.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.105 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.105 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2443412 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2443412 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2443412 ']' 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.364 18:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.364 [2024-07-24 18:50:26.197917] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:13:41.364 [2024-07-24 18:50:26.197975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.364 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.364 [2024-07-24 18:50:26.286130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.623 [2024-07-24 18:50:26.375997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.623 [2024-07-24 18:50:26.376039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.623 [2024-07-24 18:50:26.376050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.623 [2024-07-24 18:50:26.376059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.623 [2024-07-24 18:50:26.376067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
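Stripped of the xtrace noise, the nvmf_tcp_init and app-start phase above does the following: the first detected E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, reachability is ping-checked in both directions, and the target binary is launched inside the namespace. A condensed sketch, using the device names detected above (paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # -i 0: shared-memory instance id; -e 0xFFFF: tracepoint group mask;
    # -m 0xF: core mask 0b1111, hence the four reactors on cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

waitforlisten then blocks until pid 2443412 answers RPCs on /var/tmp/spdk.sock; the sub-millisecond ping RTTs are the sanity check that the two physical E810 ports (NET_TYPE=phy) really can reach each other.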
00:13:41.623 [2024-07-24 18:50:26.376116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.623 [2024-07-24 18:50:26.376148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.623 [2024-07-24 18:50:26.376260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.623 [2024-07-24 18:50:26.376260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:42.189 "tick_rate": 2200000000, 00:13:42.189 "poll_groups": [ 00:13:42.189 { 00:13:42.189 "name": "nvmf_tgt_poll_group_000", 00:13:42.189 "admin_qpairs": 0, 00:13:42.189 "io_qpairs": 0, 00:13:42.189 "current_admin_qpairs": 0, 00:13:42.189 "current_io_qpairs": 0, 00:13:42.189 "pending_bdev_io": 0, 00:13:42.189 "completed_nvme_io": 0, 00:13:42.189 "transports": [] 00:13:42.189 }, 00:13:42.189 { 00:13:42.189 "name": "nvmf_tgt_poll_group_001", 00:13:42.189 "admin_qpairs": 0, 00:13:42.189 "io_qpairs": 0, 00:13:42.189 "current_admin_qpairs": 0, 00:13:42.189 "current_io_qpairs": 0, 00:13:42.189 "pending_bdev_io": 0, 00:13:42.189 "completed_nvme_io": 0, 00:13:42.189 "transports": [] 00:13:42.189 }, 00:13:42.189 { 00:13:42.189 "name": "nvmf_tgt_poll_group_002", 00:13:42.189 "admin_qpairs": 0, 00:13:42.189 "io_qpairs": 0, 00:13:42.189 "current_admin_qpairs": 0, 00:13:42.189 "current_io_qpairs": 0, 00:13:42.189 "pending_bdev_io": 0, 00:13:42.189 "completed_nvme_io": 0, 00:13:42.189 "transports": [] 00:13:42.189 }, 00:13:42.189 { 00:13:42.189 "name": "nvmf_tgt_poll_group_003", 00:13:42.189 "admin_qpairs": 0, 00:13:42.189 "io_qpairs": 0, 00:13:42.189 "current_admin_qpairs": 0, 00:13:42.189 "current_io_qpairs": 0, 00:13:42.189 "pending_bdev_io": 0, 00:13:42.189 "completed_nvme_io": 0, 00:13:42.189 "transports": [] 00:13:42.189 } 00:13:42.189 ] 00:13:42.189 }' 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
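This first nvmf_get_stats call establishes the baseline the rest of the test diffs against: exactly one poll group per reactor core (four, matching -m 0xF), every qpair and I/O counter at zero, and an empty transports array. The jcount/jq checks above and just below amount to the following (equivalent to the jq | wc -l pipeline in the trace):

    rpc_cmd nvmf_get_stats | jq '.poll_groups | length'          # 4: one poll group per core
    rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'  # null: no transport yet

Immediately below, nvmf_create_transport -t tcp -o -u 8192 is issued and the same stats are re-read; the only change is that every poll group's transports array gains a {"trtype": "TCP"} entry, while all qpair counters stay at zero.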
00:13:42.189 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 [2024-07-24 18:50:27.219748] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:42.448 "tick_rate": 2200000000, 00:13:42.448 "poll_groups": [ 00:13:42.448 { 00:13:42.448 "name": "nvmf_tgt_poll_group_000", 00:13:42.448 "admin_qpairs": 0, 00:13:42.448 "io_qpairs": 0, 00:13:42.448 "current_admin_qpairs": 0, 00:13:42.448 "current_io_qpairs": 0, 00:13:42.448 "pending_bdev_io": 0, 00:13:42.448 "completed_nvme_io": 0, 00:13:42.448 "transports": [ 00:13:42.448 { 00:13:42.448 "trtype": "TCP" 00:13:42.448 } 00:13:42.448 ] 00:13:42.448 }, 00:13:42.448 { 00:13:42.448 "name": "nvmf_tgt_poll_group_001", 00:13:42.448 "admin_qpairs": 0, 00:13:42.448 "io_qpairs": 0, 00:13:42.448 "current_admin_qpairs": 0, 00:13:42.448 "current_io_qpairs": 0, 00:13:42.448 "pending_bdev_io": 0, 00:13:42.448 "completed_nvme_io": 0, 00:13:42.448 "transports": [ 00:13:42.448 { 00:13:42.448 "trtype": "TCP" 00:13:42.448 } 00:13:42.448 ] 00:13:42.448 }, 00:13:42.448 { 00:13:42.448 "name": "nvmf_tgt_poll_group_002", 00:13:42.448 "admin_qpairs": 0, 00:13:42.448 "io_qpairs": 0, 00:13:42.448 "current_admin_qpairs": 0, 00:13:42.448 "current_io_qpairs": 0, 00:13:42.448 "pending_bdev_io": 0, 00:13:42.448 "completed_nvme_io": 0, 00:13:42.448 "transports": [ 00:13:42.448 { 00:13:42.448 "trtype": "TCP" 00:13:42.448 } 00:13:42.448 ] 00:13:42.448 }, 00:13:42.448 { 00:13:42.448 "name": "nvmf_tgt_poll_group_003", 00:13:42.448 "admin_qpairs": 0, 00:13:42.448 "io_qpairs": 0, 00:13:42.448 "current_admin_qpairs": 0, 00:13:42.448 "current_io_qpairs": 0, 00:13:42.448 "pending_bdev_io": 0, 00:13:42.448 "completed_nvme_io": 0, 00:13:42.448 "transports": [ 00:13:42.448 { 00:13:42.448 "trtype": "TCP" 00:13:42.448 } 00:13:42.448 ] 00:13:42.448 } 00:13:42.448 ] 00:13:42.448 }' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:42.448 18:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 Malloc1 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 [2024-07-24 18:50:27.408065] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:42.448 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:42.448 [2024-07-24 18:50:27.432682] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:13:42.448 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:42.448 could not add new controller: failed to write to nvme-fabrics device 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.707 18:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.082 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.082 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:44.082 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.082 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:44.082 18:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:45.983 18:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.983 [2024-07-24 18:50:30.987129] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:13:46.242 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:46.242 could not add new controller: failed to write to nvme-fabrics device 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.242 18:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:47.617 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.617 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:47.617 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.617 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:47.617 18:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
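The connect/fail/connect pattern running through this stretch is the host access-control test. Its RPC skeleton, condensed (the real script drives everything through rpc_cmd and the NOT expected-failure helper; the elided nvme connect arguments are the --hostnqn/--hostid/-t tcp/-n/-a/-s values shown verbatim above):

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # lock it down
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    NOT nvme connect ...                      # must fail: host NQN not on the allowed list
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect ...                          # succeeds; disconnected again afterwards
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    NOT nvme connect ...                      # must fail again
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # reopen to any host
    nvme connect ...                          # must now succeed with no explicit host entry

Both expected failures surface on the initiator as "Failed to write to /dev/nvme-fabrics: Input/output error", while the target log carries the actual reason (nvmf_qpair_access_allowed rejecting the host NQN). waitforserial, visible just below, confirms each successful connect by polling lsblk until a device with serial SPDKISFASTANDAWESOME appears.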
00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.524 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.525 [2024-07-24 18:50:34.504450] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.525 
18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.525 18:50:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.939 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:50.939 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:50.939 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.939 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:50.939 18:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:53.472 18:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.472 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.473 [2024-07-24 18:50:38.089863] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.473 18:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.849 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.849 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:13:54.849 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.849 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:54.849 18:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.755 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.756 [2024-07-24 18:50:41.627121] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.756 18:50:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:58.131 18:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:58.131 18:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:13:58.131 18:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.131 18:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:13:58.131 18:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:14:00.034 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.293 18:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 [2024-07-24 18:50:45.204535] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.294 18:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.875 18:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:01.875 18:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:14:01.875 18:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.875 18:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:14:01.875 18:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.780 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 [2024-07-24 18:50:48.692970] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 18:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.167 18:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.167 18:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:14:05.167 18:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.167 18:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:14:05.167 18:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:14:07.695 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:14:07.695 18:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:14:07.695 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.695 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:14:07.695 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.695 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:14:07.695 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 [2024-07-24 18:50:52.241703] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:07.696 [2024-07-24 18:50:52.289883] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
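
This second pass (rpc.sh lines 99-107) cycles the subsystem without any host I/O, and nvmf_subsystem_add_ns is now called without -n, so the target assigns the first free NSID; that is why the matching remove targets namespace 1 rather than 5. One iteration, under the same rpc_cmd assumption as the sketch above:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: NSID 1 is auto-assigned
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 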
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 [2024-07-24 18:50:52.342054] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.696 [2024-07-24 18:50:52.390196] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.696 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 [2024-07-24 18:50:52.438436] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:07.697 "tick_rate": 2200000000, 00:14:07.697 "poll_groups": [ 00:14:07.697 { 00:14:07.697 "name": "nvmf_tgt_poll_group_000", 00:14:07.697 "admin_qpairs": 2, 00:14:07.697 "io_qpairs": 196, 00:14:07.697 "current_admin_qpairs": 0, 00:14:07.697 "current_io_qpairs": 0, 00:14:07.697 "pending_bdev_io": 0, 00:14:07.697 "completed_nvme_io": 298, 00:14:07.697 "transports": [ 00:14:07.697 { 00:14:07.697 "trtype": "TCP" 00:14:07.697 } 00:14:07.697 ] 00:14:07.697 }, 00:14:07.697 { 00:14:07.697 "name": "nvmf_tgt_poll_group_001", 00:14:07.697 "admin_qpairs": 2, 00:14:07.697 "io_qpairs": 196, 00:14:07.697 "current_admin_qpairs": 0, 00:14:07.697 "current_io_qpairs": 0, 00:14:07.697 "pending_bdev_io": 0, 00:14:07.697 "completed_nvme_io": 245, 00:14:07.697 "transports": [ 00:14:07.697 { 00:14:07.697 "trtype": "TCP" 00:14:07.697 } 00:14:07.697 ] 00:14:07.697 }, 00:14:07.697 { 00:14:07.697 "name": "nvmf_tgt_poll_group_002", 00:14:07.697 "admin_qpairs": 1, 00:14:07.697 "io_qpairs": 196, 00:14:07.697 "current_admin_qpairs": 0, 00:14:07.697 "current_io_qpairs": 0, 00:14:07.697 "pending_bdev_io": 0, 00:14:07.697 "completed_nvme_io": 295, 00:14:07.697 "transports": [ 00:14:07.697 { 00:14:07.697 "trtype": "TCP" 00:14:07.697 } 00:14:07.697 ] 00:14:07.697 }, 00:14:07.697 { 00:14:07.697 "name": "nvmf_tgt_poll_group_003", 00:14:07.697 "admin_qpairs": 2, 00:14:07.697 "io_qpairs": 196, 00:14:07.697 "current_admin_qpairs": 0, 00:14:07.697 "current_io_qpairs": 0, 00:14:07.697 "pending_bdev_io": 0, 00:14:07.697 "completed_nvme_io": 296, 00:14:07.697 "transports": [ 00:14:07.697 { 00:14:07.697 "trtype": "TCP" 00:14:07.697 } 00:14:07.697 ] 00:14:07.697 } 00:14:07.697 ] 00:14:07.697 }' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print 
s}'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 ))
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:07.697 rmmod nvme_tcp
00:14:07.697 rmmod nvme_fabrics
00:14:07.697 rmmod nvme_keyring
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2443412 ']'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2443412
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2443412 ']'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2443412
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2443412
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2443412'
00:14:07.697 killing process with pid 2443412
00:14:07.697 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2443412
00:14:07.698 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2443412
00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
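
The jsum helper traced at rpc.sh@19-20 totals one numeric field across all poll groups in the nvmf_get_stats JSON: a jq filter emits one number per poll group and awk sums them. Against the stats captured above, .poll_groups[].admin_qpairs sums to 2+2+1+2 = 7 and .poll_groups[].io_qpairs to 4 x 196 = 784, which is exactly what the (( 7 > 0 )) and (( 784 > 0 )) assertions checked. A sketch, assuming the JSON was stored in $stats by the rpc.sh@110 capture shown earlier:

    stats=$(rpc_cmd nvmf_get_stats)

    jsum() {
        local filter=$1
        # One number per poll group, summed into a single total.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))

00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 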
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.956 18:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.491 00:14:10.491 real 0m34.943s 00:14:10.491 user 1m47.246s 00:14:10.491 sys 0m6.560s 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.491 ************************************ 00:14:10.491 END TEST nvmf_rpc 00:14:10.491 ************************************ 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.491 ************************************ 00:14:10.491 START TEST nvmf_invalid 00:14:10.491 ************************************ 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:10.491 * Looking for test storage... 00:14:10.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:10.491 18:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.491 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.492 18:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.492 18:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.060 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:17.061 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:17.061 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:17.061 Found net devices under 0000:af:00.0: cvl_0_0 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.061 18:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:17.061 Found net devices under 0000:af:00.1: cvl_0_1 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.061 18:51:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:14:17.061 00:14:17.061 --- 10.0.0.2 ping statistics --- 00:14:17.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.061 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:14:17.061 00:14:17.061 --- 10.0.0.1 ping statistics --- 00:14:17.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.061 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2451816 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2451816 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2451816 ']' 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.061 18:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.061 18:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.061 [2024-07-24 18:51:01.180674] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:14:17.061 [2024-07-24 18:51:01.180730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.061 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.061 [2024-07-24 18:51:01.268154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.062 [2024-07-24 18:51:01.359435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.062 [2024-07-24 18:51:01.359480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.062 [2024-07-24 18:51:01.359491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.062 [2024-07-24 18:51:01.359500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.062 [2024-07-24 18:51:01.359508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
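The nvmf_tcp_init sequence traced above builds the physical-NIC test topology: the second E810 port stays in the root namespace as the initiator side, the first port is moved into a private namespace for the target, the two ends get 10.0.0.1/10.0.0.2, TCP port 4420 is opened, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of the same sequence, using the interface names and addresses from this run (on another node the cvl_* names would differ):

# rebuild the two-namespace NVMe/TCP topology from the trace (sketch)
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                             # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMF_PORT
ping -c 1 10.0.0.2                             # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> root ns
# every target-side command, including nvmf_tgt itself, is then wrapped as in
# the nvmfappstart line above:  ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF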
00:14:17.062 [2024-07-24 18:51:01.359557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.062 [2024-07-24 18:51:01.359671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.062 [2024-07-24 18:51:01.359710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.062 [2024-07-24 18:51:01.359709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.062 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.062 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:17.062 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.062 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.062 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21715 00:14:17.320 [2024-07-24 18:51:02.250122] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:17.320 { 00:14:17.320 "nqn": "nqn.2016-06.io.spdk:cnode21715", 00:14:17.320 "tgt_name": "foobar", 00:14:17.320 "method": "nvmf_create_subsystem", 00:14:17.320 "req_id": 1 00:14:17.320 } 00:14:17.320 Got JSON-RPC error response 00:14:17.320 response: 00:14:17.320 { 00:14:17.320 "code": -32603, 00:14:17.320 "message": "Unable to find target foobar" 00:14:17.320 }' 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:17.320 { 00:14:17.320 "nqn": "nqn.2016-06.io.spdk:cnode21715", 00:14:17.320 "tgt_name": "foobar", 00:14:17.320 "method": "nvmf_create_subsystem", 00:14:17.320 "req_id": 1 00:14:17.320 } 00:14:17.320 Got JSON-RPC error response 00:14:17.320 response: 00:14:17.320 { 00:14:17.320 "code": -32603, 00:14:17.320 "message": "Unable to find target foobar" 00:14:17.320 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:17.320 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9340 00:14:17.578 [2024-07-24 18:51:02.519181] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9340: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:17.578 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:17.578 { 00:14:17.578 "nqn": "nqn.2016-06.io.spdk:cnode9340", 00:14:17.578 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:17.578 "method": "nvmf_create_subsystem", 00:14:17.578 "req_id": 1 00:14:17.578 } 00:14:17.578 Got JSON-RPC error 
response 00:14:17.578 response: 00:14:17.578 { 00:14:17.578 "code": -32602, 00:14:17.578 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:17.578 }' 00:14:17.578 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:17.578 { 00:14:17.578 "nqn": "nqn.2016-06.io.spdk:cnode9340", 00:14:17.578 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:17.578 "method": "nvmf_create_subsystem", 00:14:17.578 "req_id": 1 00:14:17.578 } 00:14:17.578 Got JSON-RPC error response 00:14:17.578 response: 00:14:17.578 { 00:14:17.578 "code": -32602, 00:14:17.578 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:17.578 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:17.578 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:17.578 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode645 00:14:17.837 [2024-07-24 18:51:02.784162] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode645: invalid model number 'SPDK_Controller' 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:17.837 { 00:14:17.837 "nqn": "nqn.2016-06.io.spdk:cnode645", 00:14:17.837 "model_number": "SPDK_Controller\u001f", 00:14:17.837 "method": "nvmf_create_subsystem", 00:14:17.837 "req_id": 1 00:14:17.837 } 00:14:17.837 Got JSON-RPC error response 00:14:17.837 response: 00:14:17.837 { 00:14:17.837 "code": -32602, 00:14:17.837 "message": "Invalid MN SPDK_Controller\u001f" 00:14:17.837 }' 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:17.837 { 00:14:17.837 "nqn": "nqn.2016-06.io.spdk:cnode645", 00:14:17.837 "model_number": "SPDK_Controller\u001f", 00:14:17.837 "method": "nvmf_create_subsystem", 00:14:17.837 "req_id": 1 00:14:17.837 } 00:14:17.837 Got JSON-RPC error response 00:14:17.837 response: 00:14:17.837 { 00:14:17.837 "code": -32602, 00:14:17.837 "message": "Invalid MN SPDK_Controller\u001f" 00:14:17.837 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 49 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.837 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=6 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '10Iv|[eN\T9mg8$[;uh-6' 00:14:18.097 18:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '10Iv|[eN\T9mg8$[;uh-6' nqn.2016-06.io.spdk:cnode24208 00:14:18.357 [2024-07-24 18:51:03.185682] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24208: invalid serial number '10Iv|[eN\T9mg8$[;uh-6' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:18.357 { 00:14:18.357 "nqn": "nqn.2016-06.io.spdk:cnode24208", 00:14:18.357 "serial_number": "10Iv|[eN\\T9mg8$[;uh-6", 00:14:18.357 "method": "nvmf_create_subsystem", 00:14:18.357 "req_id": 1 00:14:18.357 } 00:14:18.357 Got JSON-RPC error response 00:14:18.357 response: 00:14:18.357 { 00:14:18.357 "code": -32602, 00:14:18.357 "message": "Invalid SN 10Iv|[eN\\T9mg8$[;uh-6" 00:14:18.357 }' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:18.357 { 00:14:18.357 "nqn": "nqn.2016-06.io.spdk:cnode24208", 00:14:18.357 "serial_number": "10Iv|[eN\\T9mg8$[;uh-6", 00:14:18.357 "method": "nvmf_create_subsystem", 00:14:18.357 "req_id": 1 00:14:18.357 } 00:14:18.357 Got JSON-RPC error response 00:14:18.357 response: 00:14:18.357 { 00:14:18.357 "code": -32602, 00:14:18.357 "message": "Invalid SN 10Iv|[eN\\T9mg8$[;uh-6" 00:14:18.357 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:18.357 18:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:18.357 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:18.358 18:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.358 18:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.358 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:18.617 
18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:14:18.617 18:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'z*0f;|j6%JR8tR4uc*QE=e`Cl'\''=< /dev/null'
00:14:21.461 18:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:23.997
00:14:23.997 real 0m13.351s
00:14:23.997 user 0m24.705s
00:14:23.997 sys 0m5.558s
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:14:23.997 ************************************
00:14:23.997 END TEST nvmf_invalid
00:14:23.997 ************************************
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:23.997 ************************************
00:14:23.997 START TEST nvmf_connect_stress
00:14:23.997 ************************************
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:23.997 * Looking for test storage...
00:14:23.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:14:23.997 18:51:08
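Each nvmf_invalid case traced before the END TEST banner follows the same three-step pattern: issue nvmf_create_subsystem over JSON-RPC with one deliberately malformed argument, capture the error body into $out, and glob-match the expected message. A condensed sketch of that pattern, assuming $rpc points at scripts/rpc.py and that the error text is captured via stderr redirection (the exact capture plumbing lives in test/nvmf/target/invalid.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# unknown target name -> code -32603, "Unable to find target foobar"
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21715 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# serial number containing a control byte (0x1f) -> code -32602, "Invalid SN ..."
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9340 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# model number containing a control byte -> code -32602, "Invalid MN ..."
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode645 2>&1) || true
[[ $out == *"Invalid MN"* ]]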
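The random strings fed to those checks come from gen_random_s, which the trace shows assembling its result one character at a time: pick an entry from a chars array covering ASCII codes 32 through 127, render it with printf %x plus echo -e, and append it to string. A compact functional equivalent of what the trace performs (a sketch using $RANDOM, not a copy of invalid.sh):

gen_random_s() {
    # emit $1 pseudo-random characters drawn from ASCII 32..127,
    # the same range as the chars=('32' ... '127') array in the trace
    local length=$1 ll string= code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 96 + 32 ))                 # 96 code points: 32..127
        string+=$(echo -e "\x$(printf %x "$code")")  # hex escape, as traced
    done
    echo "$string"
}

gen_random_s 21   # e.g. '10Iv|[eN\T9mg8$[;uh-6', the invalid serial-number case above
gen_random_s 41   # the longer case echoed just before the gap in the log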
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.997 18:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:29.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.271 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:29.272 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:29.272 Found net devices under 0000:af:00.0: cvl_0_0 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.272 18:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:29.272 Found net devices under 0000:af:00.1: cvl_0_1 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.272 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.530 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.530 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:29.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:29.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms
00:14:29.531
00:14:29.531 --- 10.0.0.2 ping statistics ---
00:14:29.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:29.531 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:29.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:29.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:14:29.531
00:14:29.531 --- 10.0.0.1 ping statistics ---
00:14:29.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:29.531 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:29.531 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2456953
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2456953
00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:14:29.789 18:51:14
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2456953 ']' 00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.789 18:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.789 [2024-07-24 18:51:14.620783] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:14:29.789 [2024-07-24 18:51:14.620837] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.789 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.789 [2024-07-24 18:51:14.706179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.048 [2024-07-24 18:51:14.811203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.048 [2024-07-24 18:51:14.811256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.048 [2024-07-24 18:51:14.811269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.048 [2024-07-24 18:51:14.811280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.048 [2024-07-24 18:51:14.811290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.048 [2024-07-24 18:51:14.811415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.048 [2024-07-24 18:51:14.811536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.048 [2024-07-24 18:51:14.811538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.616 [2024-07-24 18:51:15.541151] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.616 [2024-07-24 18:51:15.582586] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.616 NULL1 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=2457230 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.616 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.875 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.875 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.875 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.876 18:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.134 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.134 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:31.134 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.134 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.134 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.392 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.392 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:31.392 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.392 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.392 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.961 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.961 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:31.961 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.961 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.961 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.286 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.286 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:32.286 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.286 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.286 18:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.559 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.559 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:32.559 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.559 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.559 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.817 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.817 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:32.817 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.817 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.817 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.076 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.076 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:33.076 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.076 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.076 18:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.334 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.334 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:33.334 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.334 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.334 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.901 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.901 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:33.901 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.901 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.901 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.159 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.159 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:34.159 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.159 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.159 18:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.417 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.417 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:34.417 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.417 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.417 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.676 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.676 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:34.676 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.676 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.676 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.934 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.934 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:34.934 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.934 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.934 18:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.501 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.501 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:35.501 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.501 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.501 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.759 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.759 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:35.759 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.759 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.759 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.018 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.018 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:36.018 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.018 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.018 18:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.277 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.277 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:36.277 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.277 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.277 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.536 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.536 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:36.536 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.536 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.536 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.104 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.104 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:37.104 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.104 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.104 18:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.363 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.363 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:37.363 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.363 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.363 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.622 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:37.622 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.622 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.622 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.881 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.881 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:37.881 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.881 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.881 18:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.449 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.449 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:38.449 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.449 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.449 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.707 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.707 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:38.707 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.707 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.707 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.964 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.964 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:38.964 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.964 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.964 18:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.223 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.223 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:39.223 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.223 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.223 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.481 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.481 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:39.481 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.481 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.481 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.049 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.049 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:40.049 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.049 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.049 18:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.307 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.307 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:40.307 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.307 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.307 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.566 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.566 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:40.566 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.566 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.566 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.825 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2457230 00:14:40.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2457230) - No such process 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2457230 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 
00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.825 rmmod nvme_tcp 00:14:40.825 rmmod nvme_fabrics 00:14:40.825 rmmod nvme_keyring 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2456953 ']' 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2456953 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2456953 ']' 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2456953 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.825 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2456953 00:14:41.084 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:41.084 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:41.084 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2456953' 00:14:41.084 killing process with pid 2456953 00:14:41.084 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2456953 00:14:41.084 18:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2456953 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.084 18:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.084 18:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.618 00:14:43.618 real 0m19.656s 00:14:43.618 user 0m41.395s 00:14:43.618 sys 0m8.279s 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.618 ************************************ 00:14:43.618 END TEST nvmf_connect_stress 00:14:43.618 ************************************ 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.618 ************************************ 00:14:43.618 START TEST nvmf_fused_ordering 00:14:43.618 ************************************ 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:43.618 * Looking for test storage... 00:14:43.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:43.618 18:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.618 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.619 18:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:50.225 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:50.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:50.225 Found net devices under 0000:af:00.0: cvl_0_0 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.225 18:51:33 
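The discovery loop above has just matched both functions of the E810 NIC (vendor 0x8086, device 0x159b, driver ice) and now resolves each PCI function to its kernel net device through sysfs. A hedged sketch of the same walk, reduced to this rig's vendor/device pair:

  for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
      [[ -e $net ]] || continue      # skip functions with no bound netdev
      echo "Found ${pci##*/} -> ${net##*/}"
    done
  done

Only interfaces reporting 'up' survive into net_devs, which is the [[ up == up ]] check echoed in the log.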
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:50.225 Found net devices under 0000:af:00.1: cvl_0_1 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.225 18:51:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.225 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.225 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:14:50.225 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:14:50.226 00:14:50.226 --- 10.0.0.2 ping statistics --- 00:14:50.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.226 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:14:50.226 00:14:50.226 --- 10.0.0.1 ping statistics --- 00:14:50.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.226 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2462627 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2462627 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:50.226 18:51:34 
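The topology built above is the phy-mode loopback used throughout this job: one port of the NIC moves into a private namespace and plays the target (10.0.0.2), while the peer port stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic genuinely crosses the wire between the two ports. Condensed from the commands just logged (the cvl_0_* names are whatever this rig enumerated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The two single-packet pings act as the gate: sub-millisecond RTTs in both directions confirm the namespace plumbing before the target is started.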
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2462627 ']' 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.226 18:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.226 [2024-07-24 18:51:34.351863] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:14:50.226 [2024-07-24 18:51:34.351926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.226 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.226 [2024-07-24 18:51:34.438486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.226 [2024-07-24 18:51:34.540384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.226 [2024-07-24 18:51:34.540434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.226 [2024-07-24 18:51:34.540447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.226 [2024-07-24 18:51:34.540458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.226 [2024-07-24 18:51:34.540469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
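nvmfappstart pins the target to core 1 (-m 0x2), enables all tracepoint groups (-e 0xFFFF), and runs it inside the target namespace; waitforlisten then blocks until the RPC socket answers before any rpc_cmd is issued. A rough equivalent, with the polling loop standing in for waitforlisten's internals:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5    # target not yet listening on the RPC socket
  done

The RPC socket is a Unix socket on the shared filesystem, which is why the later rpc_cmd calls work from the root namespace without a netns exec.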
00:14:50.226 [2024-07-24 18:51:34.540509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 [2024-07-24 18:51:35.348557] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 [2024-07-24 18:51:35.368768] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 NULL1 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.485 18:51:35 
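With the target up, the script provisions a minimal subsystem over RPC (rpc_cmd is a thin wrapper around scripts/rpc.py): a TCP transport with the flags as logged, a subsystem capped at 10 namespaces that any host may connect to, a listener on the namespaced address, and a 1000 MB null bdev that becomes namespace 1 in the add_ns call just below. The same sequence as direct rpc.py invocations:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10   # allow any host, serial, max 10 ns
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB, 512 B blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

A null bdev completes I/O immediately with no backing media, which keeps the test focused on command ordering rather than storage latency.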
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.485 18:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:50.485 [2024-07-24 18:51:35.429675] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:14:50.485 [2024-07-24 18:51:35.429746] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462826 ] 00:14:50.744 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.004 Attached to nqn.2016-06.io.spdk:cnode1 00:14:51.004 Namespace ID: 1 size: 1GB 00:14:51.004 fused_ordering(0) 00:14:51.004 fused_ordering(1) 00:14:51.004 fused_ordering(2) 00:14:51.004 fused_ordering(3) 00:14:51.004 fused_ordering(4) 00:14:51.004 fused_ordering(5) 00:14:51.004 fused_ordering(6) 00:14:51.004 fused_ordering(7) 00:14:51.004 fused_ordering(8) 00:14:51.004 fused_ordering(9) 00:14:51.004 fused_ordering(10) 00:14:51.004 fused_ordering(11) 00:14:51.004 fused_ordering(12) 00:14:51.004 fused_ordering(13) 00:14:51.004 fused_ordering(14) 00:14:51.004 fused_ordering(15) 00:14:51.004 fused_ordering(16) 00:14:51.004 fused_ordering(17) 00:14:51.004 fused_ordering(18) 00:14:51.004 fused_ordering(19) 00:14:51.004 fused_ordering(20) 00:14:51.004 fused_ordering(21) 00:14:51.004 fused_ordering(22) 00:14:51.004 fused_ordering(23) 00:14:51.004 fused_ordering(24) 00:14:51.004 fused_ordering(25) 00:14:51.004 fused_ordering(26) 00:14:51.004 fused_ordering(27) 00:14:51.004 fused_ordering(28) 00:14:51.004 fused_ordering(29) 00:14:51.004 fused_ordering(30) 00:14:51.004 fused_ordering(31) 00:14:51.004 fused_ordering(32) 00:14:51.004 fused_ordering(33) 00:14:51.004 fused_ordering(34) 00:14:51.004 fused_ordering(35) 00:14:51.004 fused_ordering(36) 00:14:51.004 fused_ordering(37) 00:14:51.004 fused_ordering(38) 00:14:51.004 fused_ordering(39) 00:14:51.004 fused_ordering(40) 00:14:51.004 fused_ordering(41) 00:14:51.004 fused_ordering(42) 00:14:51.004 fused_ordering(43) 00:14:51.004 fused_ordering(44) 00:14:51.004 fused_ordering(45) 00:14:51.004 fused_ordering(46) 00:14:51.004 fused_ordering(47) 00:14:51.004 fused_ordering(48) 00:14:51.004 fused_ordering(49) 00:14:51.004 fused_ordering(50) 00:14:51.004 fused_ordering(51) 00:14:51.004 fused_ordering(52) 00:14:51.004 fused_ordering(53) 00:14:51.004 fused_ordering(54) 00:14:51.004 fused_ordering(55) 00:14:51.004 fused_ordering(56) 00:14:51.004 fused_ordering(57) 00:14:51.004 fused_ordering(58) 00:14:51.004 fused_ordering(59) 00:14:51.004 fused_ordering(60) 00:14:51.004 
fused_ordering(61) 00:14:51.004 fused_ordering(62) 00:14:51.004 fused_ordering(63) 00:14:51.004 fused_ordering(64) 00:14:51.004 fused_ordering(65) 00:14:51.004 fused_ordering(66) 00:14:51.004 fused_ordering(67) 00:14:51.004 fused_ordering(68) 00:14:51.004 fused_ordering(69) 00:14:51.004 fused_ordering(70) 00:14:51.004 fused_ordering(71) 00:14:51.004 fused_ordering(72) 00:14:51.004 fused_ordering(73) 00:14:51.004 fused_ordering(74) 00:14:51.004 fused_ordering(75) 00:14:51.004 fused_ordering(76) 00:14:51.004 fused_ordering(77) 00:14:51.004 fused_ordering(78) 00:14:51.004 fused_ordering(79) 00:14:51.004 fused_ordering(80) 00:14:51.004 fused_ordering(81) 00:14:51.004 fused_ordering(82) 00:14:51.004 fused_ordering(83) 00:14:51.004 fused_ordering(84) 00:14:51.004 fused_ordering(85) 00:14:51.004 fused_ordering(86) 00:14:51.004 fused_ordering(87) 00:14:51.004 fused_ordering(88) 00:14:51.004 fused_ordering(89) 00:14:51.004 fused_ordering(90) 00:14:51.004 fused_ordering(91) 00:14:51.004 fused_ordering(92) 00:14:51.004 fused_ordering(93) 00:14:51.004 fused_ordering(94) 00:14:51.004 fused_ordering(95) 00:14:51.004 fused_ordering(96) 00:14:51.004 fused_ordering(97) 00:14:51.004 fused_ordering(98) 00:14:51.004 fused_ordering(99) 00:14:51.004 fused_ordering(100) 00:14:51.004 fused_ordering(101) 00:14:51.004 fused_ordering(102) 00:14:51.004 fused_ordering(103) 00:14:51.004 fused_ordering(104) 00:14:51.004 fused_ordering(105) 00:14:51.004 fused_ordering(106) 00:14:51.004 fused_ordering(107) 00:14:51.004 fused_ordering(108) 00:14:51.004 fused_ordering(109) 00:14:51.004 fused_ordering(110) 00:14:51.004 fused_ordering(111) 00:14:51.004 fused_ordering(112) 00:14:51.004 fused_ordering(113) 00:14:51.004 fused_ordering(114) 00:14:51.004 fused_ordering(115) 00:14:51.004 fused_ordering(116) 00:14:51.004 fused_ordering(117) 00:14:51.005 fused_ordering(118) 00:14:51.005 fused_ordering(119) 00:14:51.005 fused_ordering(120) 00:14:51.005 fused_ordering(121) 00:14:51.005 fused_ordering(122) 00:14:51.005 fused_ordering(123) 00:14:51.005 fused_ordering(124) 00:14:51.005 fused_ordering(125) 00:14:51.005 fused_ordering(126) 00:14:51.005 fused_ordering(127) 00:14:51.005 fused_ordering(128) 00:14:51.005 fused_ordering(129) 00:14:51.005 fused_ordering(130) 00:14:51.005 fused_ordering(131) 00:14:51.005 fused_ordering(132) 00:14:51.005 fused_ordering(133) 00:14:51.005 fused_ordering(134) 00:14:51.005 fused_ordering(135) 00:14:51.005 fused_ordering(136) 00:14:51.005 fused_ordering(137) 00:14:51.005 fused_ordering(138) 00:14:51.005 fused_ordering(139) 00:14:51.005 fused_ordering(140) 00:14:51.005 fused_ordering(141) 00:14:51.005 fused_ordering(142) 00:14:51.005 fused_ordering(143) 00:14:51.005 fused_ordering(144) 00:14:51.005 fused_ordering(145) 00:14:51.005 fused_ordering(146) 00:14:51.005 fused_ordering(147) 00:14:51.005 fused_ordering(148) 00:14:51.005 fused_ordering(149) 00:14:51.005 fused_ordering(150) 00:14:51.005 fused_ordering(151) 00:14:51.005 fused_ordering(152) 00:14:51.005 fused_ordering(153) 00:14:51.005 fused_ordering(154) 00:14:51.005 fused_ordering(155) 00:14:51.005 fused_ordering(156) 00:14:51.005 fused_ordering(157) 00:14:51.005 fused_ordering(158) 00:14:51.005 fused_ordering(159) 00:14:51.005 fused_ordering(160) 00:14:51.005 fused_ordering(161) 00:14:51.005 fused_ordering(162) 00:14:51.005 fused_ordering(163) 00:14:51.005 fused_ordering(164) 00:14:51.005 fused_ordering(165) 00:14:51.005 fused_ordering(166) 00:14:51.005 fused_ordering(167) 00:14:51.005 fused_ordering(168) 00:14:51.005 fused_ordering(169) 
00:14:51.005 fused_ordering(170) 00:14:51.005 fused_ordering(171) 00:14:51.005 fused_ordering(172) 00:14:51.005 fused_ordering(173) 00:14:51.005 fused_ordering(174) 00:14:51.005 fused_ordering(175) 00:14:51.005 fused_ordering(176) 00:14:51.005 fused_ordering(177) 00:14:51.005 fused_ordering(178) 00:14:51.005 fused_ordering(179) 00:14:51.005 fused_ordering(180) 00:14:51.005 fused_ordering(181) 00:14:51.005 fused_ordering(182) 00:14:51.005 fused_ordering(183) 00:14:51.005 fused_ordering(184) 00:14:51.005 fused_ordering(185) 00:14:51.005 fused_ordering(186) 00:14:51.005 fused_ordering(187) 00:14:51.005 fused_ordering(188) 00:14:51.005 fused_ordering(189) 00:14:51.005 fused_ordering(190) 00:14:51.005 fused_ordering(191) 00:14:51.005 fused_ordering(192) 00:14:51.005 fused_ordering(193) 00:14:51.005 fused_ordering(194) 00:14:51.005 fused_ordering(195) 00:14:51.005 fused_ordering(196) 00:14:51.005 fused_ordering(197) 00:14:51.005 fused_ordering(198) 00:14:51.005 fused_ordering(199) 00:14:51.005 fused_ordering(200) 00:14:51.005 fused_ordering(201) 00:14:51.005 fused_ordering(202) 00:14:51.005 fused_ordering(203) 00:14:51.005 fused_ordering(204) 00:14:51.005 fused_ordering(205) 00:14:51.571 fused_ordering(206) 00:14:51.571 fused_ordering(207) 00:14:51.571 fused_ordering(208) 00:14:51.571 fused_ordering(209) 00:14:51.571 fused_ordering(210) 00:14:51.571 fused_ordering(211) 00:14:51.571 fused_ordering(212) 00:14:51.571 fused_ordering(213) 00:14:51.571 fused_ordering(214) 00:14:51.571 fused_ordering(215) 00:14:51.571 fused_ordering(216) 00:14:51.571 fused_ordering(217) 00:14:51.571 fused_ordering(218) 00:14:51.571 fused_ordering(219) 00:14:51.571 fused_ordering(220) 00:14:51.571 fused_ordering(221) 00:14:51.571 fused_ordering(222) 00:14:51.571 fused_ordering(223) 00:14:51.571 fused_ordering(224) 00:14:51.571 fused_ordering(225) 00:14:51.571 fused_ordering(226) 00:14:51.571 fused_ordering(227) 00:14:51.571 fused_ordering(228) 00:14:51.571 fused_ordering(229) 00:14:51.571 fused_ordering(230) 00:14:51.571 fused_ordering(231) 00:14:51.571 fused_ordering(232) 00:14:51.571 fused_ordering(233) 00:14:51.571 fused_ordering(234) 00:14:51.571 fused_ordering(235) 00:14:51.571 fused_ordering(236) 00:14:51.571 fused_ordering(237) 00:14:51.571 fused_ordering(238) 00:14:51.572 fused_ordering(239) 00:14:51.572 fused_ordering(240) 00:14:51.572 fused_ordering(241) 00:14:51.572 fused_ordering(242) 00:14:51.572 fused_ordering(243) 00:14:51.572 fused_ordering(244) 00:14:51.572 fused_ordering(245) 00:14:51.572 fused_ordering(246) 00:14:51.572 fused_ordering(247) 00:14:51.572 fused_ordering(248) 00:14:51.572 fused_ordering(249) 00:14:51.572 fused_ordering(250) 00:14:51.572 fused_ordering(251) 00:14:51.572 fused_ordering(252) 00:14:51.572 fused_ordering(253) 00:14:51.572 fused_ordering(254) 00:14:51.572 fused_ordering(255) 00:14:51.572 fused_ordering(256) 00:14:51.572 fused_ordering(257) 00:14:51.572 fused_ordering(258) 00:14:51.572 fused_ordering(259) 00:14:51.572 fused_ordering(260) 00:14:51.572 fused_ordering(261) 00:14:51.572 fused_ordering(262) 00:14:51.572 fused_ordering(263) 00:14:51.572 fused_ordering(264) 00:14:51.572 fused_ordering(265) 00:14:51.572 fused_ordering(266) 00:14:51.572 fused_ordering(267) 00:14:51.572 fused_ordering(268) 00:14:51.572 fused_ordering(269) 00:14:51.572 fused_ordering(270) 00:14:51.572 fused_ordering(271) 00:14:51.572 fused_ordering(272) 00:14:51.572 fused_ordering(273) 00:14:51.572 fused_ordering(274) 00:14:51.572 fused_ordering(275) 00:14:51.572 fused_ordering(276) 00:14:51.572 
fused_ordering(277) 00:14:51.572 fused_ordering(278) 00:14:51.572 fused_ordering(279) 00:14:51.572 fused_ordering(280) 00:14:51.572 fused_ordering(281) 00:14:51.572 fused_ordering(282) 00:14:51.572 fused_ordering(283) 00:14:51.572 fused_ordering(284) 00:14:51.572 fused_ordering(285) 00:14:51.572 fused_ordering(286) 00:14:51.572 fused_ordering(287) 00:14:51.572 fused_ordering(288) 00:14:51.572 fused_ordering(289) 00:14:51.572 fused_ordering(290) 00:14:51.572 fused_ordering(291) 00:14:51.572 fused_ordering(292) 00:14:51.572 fused_ordering(293) 00:14:51.572 fused_ordering(294) 00:14:51.572 fused_ordering(295) 00:14:51.572 fused_ordering(296) 00:14:51.572 fused_ordering(297) 00:14:51.572 fused_ordering(298) 00:14:51.572 fused_ordering(299) 00:14:51.572 fused_ordering(300) 00:14:51.572 fused_ordering(301) 00:14:51.572 fused_ordering(302) 00:14:51.572 fused_ordering(303) 00:14:51.572 fused_ordering(304) 00:14:51.572 fused_ordering(305) 00:14:51.572 fused_ordering(306) 00:14:51.572 fused_ordering(307) 00:14:51.572 fused_ordering(308) 00:14:51.572 fused_ordering(309) 00:14:51.572 fused_ordering(310) 00:14:51.572 fused_ordering(311) 00:14:51.572 fused_ordering(312) 00:14:51.572 fused_ordering(313) 00:14:51.572 fused_ordering(314) 00:14:51.572 fused_ordering(315) 00:14:51.572 fused_ordering(316) 00:14:51.572 fused_ordering(317) 00:14:51.572 fused_ordering(318) 00:14:51.572 fused_ordering(319) 00:14:51.572 fused_ordering(320) 00:14:51.572 fused_ordering(321) 00:14:51.572 fused_ordering(322) 00:14:51.572 fused_ordering(323) 00:14:51.572 fused_ordering(324) 00:14:51.572 fused_ordering(325) 00:14:51.572 fused_ordering(326) 00:14:51.572 fused_ordering(327) 00:14:51.572 fused_ordering(328) 00:14:51.572 fused_ordering(329) 00:14:51.572 fused_ordering(330) 00:14:51.572 fused_ordering(331) 00:14:51.572 fused_ordering(332) 00:14:51.572 fused_ordering(333) 00:14:51.572 fused_ordering(334) 00:14:51.572 fused_ordering(335) 00:14:51.572 fused_ordering(336) 00:14:51.572 fused_ordering(337) 00:14:51.572 fused_ordering(338) 00:14:51.572 fused_ordering(339) 00:14:51.572 fused_ordering(340) 00:14:51.572 fused_ordering(341) 00:14:51.572 fused_ordering(342) 00:14:51.572 fused_ordering(343) 00:14:51.572 fused_ordering(344) 00:14:51.572 fused_ordering(345) 00:14:51.572 fused_ordering(346) 00:14:51.572 fused_ordering(347) 00:14:51.572 fused_ordering(348) 00:14:51.572 fused_ordering(349) 00:14:51.572 fused_ordering(350) 00:14:51.572 fused_ordering(351) 00:14:51.572 fused_ordering(352) 00:14:51.572 fused_ordering(353) 00:14:51.572 fused_ordering(354) 00:14:51.572 fused_ordering(355) 00:14:51.572 fused_ordering(356) 00:14:51.572 fused_ordering(357) 00:14:51.572 fused_ordering(358) 00:14:51.572 fused_ordering(359) 00:14:51.572 fused_ordering(360) 00:14:51.572 fused_ordering(361) 00:14:51.572 fused_ordering(362) 00:14:51.572 fused_ordering(363) 00:14:51.572 fused_ordering(364) 00:14:51.572 fused_ordering(365) 00:14:51.572 fused_ordering(366) 00:14:51.572 fused_ordering(367) 00:14:51.572 fused_ordering(368) 00:14:51.572 fused_ordering(369) 00:14:51.572 fused_ordering(370) 00:14:51.572 fused_ordering(371) 00:14:51.572 fused_ordering(372) 00:14:51.572 fused_ordering(373) 00:14:51.572 fused_ordering(374) 00:14:51.572 fused_ordering(375) 00:14:51.572 fused_ordering(376) 00:14:51.572 fused_ordering(377) 00:14:51.572 fused_ordering(378) 00:14:51.572 fused_ordering(379) 00:14:51.572 fused_ordering(380) 00:14:51.572 fused_ordering(381) 00:14:51.572 fused_ordering(382) 00:14:51.572 fused_ordering(383) 00:14:51.572 fused_ordering(384) 
00:14:51.572 fused_ordering(385) 00:14:51.572 fused_ordering(386) 00:14:51.572 fused_ordering(387) 00:14:51.572 fused_ordering(388) 00:14:51.572 fused_ordering(389) 00:14:51.572 fused_ordering(390) 00:14:51.572 fused_ordering(391) 00:14:51.572 fused_ordering(392) 00:14:51.573 fused_ordering(393) 00:14:51.573 fused_ordering(394) 00:14:51.573 fused_ordering(395) 00:14:51.573 fused_ordering(396) 00:14:51.573 fused_ordering(397) 00:14:51.573 fused_ordering(398) 00:14:51.573 fused_ordering(399) 00:14:51.573 fused_ordering(400) 00:14:51.573 fused_ordering(401) 00:14:51.573 fused_ordering(402) 00:14:51.573 fused_ordering(403) 00:14:51.573 fused_ordering(404) 00:14:51.573 fused_ordering(405) 00:14:51.573 fused_ordering(406) 00:14:51.573 fused_ordering(407) 00:14:51.573 fused_ordering(408) 00:14:51.573 fused_ordering(409) 00:14:51.573 fused_ordering(410) 00:14:51.832 fused_ordering(411) 00:14:51.832 fused_ordering(412) 00:14:51.832 fused_ordering(413) 00:14:51.832 fused_ordering(414) 00:14:51.832 fused_ordering(415) 00:14:51.832 fused_ordering(416) 00:14:51.832 fused_ordering(417) 00:14:51.832 fused_ordering(418) 00:14:51.832 fused_ordering(419) 00:14:51.832 fused_ordering(420) 00:14:51.832 fused_ordering(421) 00:14:51.832 fused_ordering(422) 00:14:51.832 fused_ordering(423) 00:14:51.832 fused_ordering(424) 00:14:51.832 fused_ordering(425) 00:14:51.832 fused_ordering(426) 00:14:51.832 fused_ordering(427) 00:14:51.832 fused_ordering(428) 00:14:51.832 fused_ordering(429) 00:14:51.832 fused_ordering(430) 00:14:51.832 fused_ordering(431) 00:14:51.832 fused_ordering(432) 00:14:51.832 fused_ordering(433) 00:14:51.832 fused_ordering(434) 00:14:51.832 fused_ordering(435) 00:14:51.832 fused_ordering(436) 00:14:51.832 fused_ordering(437) 00:14:51.832 fused_ordering(438) 00:14:51.832 fused_ordering(439) 00:14:51.832 fused_ordering(440) 00:14:51.832 fused_ordering(441) 00:14:51.832 fused_ordering(442) 00:14:51.832 fused_ordering(443) 00:14:51.832 fused_ordering(444) 00:14:51.832 fused_ordering(445) 00:14:51.832 fused_ordering(446) 00:14:51.832 fused_ordering(447) 00:14:51.832 fused_ordering(448) 00:14:51.833 fused_ordering(449) 00:14:51.833 fused_ordering(450) 00:14:51.833 fused_ordering(451) 00:14:51.833 fused_ordering(452) 00:14:51.833 fused_ordering(453) 00:14:51.833 fused_ordering(454) 00:14:51.833 fused_ordering(455) 00:14:51.833 fused_ordering(456) 00:14:51.833 fused_ordering(457) 00:14:51.833 fused_ordering(458) 00:14:51.833 fused_ordering(459) 00:14:51.833 fused_ordering(460) 00:14:51.833 fused_ordering(461) 00:14:51.833 fused_ordering(462) 00:14:51.833 fused_ordering(463) 00:14:51.833 fused_ordering(464) 00:14:51.833 fused_ordering(465) 00:14:51.833 fused_ordering(466) 00:14:51.833 fused_ordering(467) 00:14:51.833 fused_ordering(468) 00:14:51.833 fused_ordering(469) 00:14:51.833 fused_ordering(470) 00:14:51.833 fused_ordering(471) 00:14:51.833 fused_ordering(472) 00:14:51.833 fused_ordering(473) 00:14:51.833 fused_ordering(474) 00:14:51.833 fused_ordering(475) 00:14:51.833 fused_ordering(476) 00:14:51.833 fused_ordering(477) 00:14:51.833 fused_ordering(478) 00:14:51.833 fused_ordering(479) 00:14:51.833 fused_ordering(480) 00:14:51.833 fused_ordering(481) 00:14:51.833 fused_ordering(482) 00:14:51.833 fused_ordering(483) 00:14:51.833 fused_ordering(484) 00:14:51.833 fused_ordering(485) 00:14:51.833 fused_ordering(486) 00:14:51.833 fused_ordering(487) 00:14:51.833 fused_ordering(488) 00:14:51.833 fused_ordering(489) 00:14:51.833 fused_ordering(490) 00:14:51.833 fused_ordering(491) 00:14:51.833 
fused_ordering(492) 00:14:51.833 fused_ordering(493) 00:14:51.833 fused_ordering(494) 00:14:51.833 fused_ordering(495) 00:14:51.833 fused_ordering(496) 00:14:51.833 fused_ordering(497) 00:14:51.833 fused_ordering(498) 00:14:51.833 fused_ordering(499) 00:14:51.833 fused_ordering(500) 00:14:51.833 fused_ordering(501) 00:14:51.833 fused_ordering(502) 00:14:51.833 fused_ordering(503) 00:14:51.833 fused_ordering(504) 00:14:51.833 fused_ordering(505) 00:14:51.833 fused_ordering(506) 00:14:51.833 fused_ordering(507) 00:14:51.833 fused_ordering(508) 00:14:51.833 fused_ordering(509) 00:14:51.833 fused_ordering(510) 00:14:51.833 fused_ordering(511) 00:14:51.833 fused_ordering(512) 00:14:51.833 fused_ordering(513) 00:14:51.833 fused_ordering(514) 00:14:51.833 fused_ordering(515) 00:14:51.833 fused_ordering(516) 00:14:51.833 fused_ordering(517) 00:14:51.833 fused_ordering(518) 00:14:51.833 fused_ordering(519) 00:14:51.833 fused_ordering(520) 00:14:51.833 fused_ordering(521) 00:14:51.833 fused_ordering(522) 00:14:51.833 fused_ordering(523) 00:14:51.833 fused_ordering(524) 00:14:51.833 fused_ordering(525) 00:14:51.833 fused_ordering(526) 00:14:51.833 fused_ordering(527) 00:14:51.833 fused_ordering(528) 00:14:51.833 fused_ordering(529) 00:14:51.833 fused_ordering(530) 00:14:51.833 fused_ordering(531) 00:14:51.833 fused_ordering(532) 00:14:51.833 fused_ordering(533) 00:14:51.833 fused_ordering(534) 00:14:51.833 fused_ordering(535) 00:14:51.833 fused_ordering(536) 00:14:51.833 fused_ordering(537) 00:14:51.833 fused_ordering(538) 00:14:51.833 fused_ordering(539) 00:14:51.833 fused_ordering(540) 00:14:51.833 fused_ordering(541) 00:14:51.833 fused_ordering(542) 00:14:51.833 fused_ordering(543) 00:14:51.833 fused_ordering(544) 00:14:51.833 fused_ordering(545) 00:14:51.833 fused_ordering(546) 00:14:51.833 fused_ordering(547) 00:14:51.833 fused_ordering(548) 00:14:51.833 fused_ordering(549) 00:14:51.833 fused_ordering(550) 00:14:51.833 fused_ordering(551) 00:14:51.833 fused_ordering(552) 00:14:51.833 fused_ordering(553) 00:14:51.833 fused_ordering(554) 00:14:51.833 fused_ordering(555) 00:14:51.833 fused_ordering(556) 00:14:51.833 fused_ordering(557) 00:14:51.833 fused_ordering(558) 00:14:51.833 fused_ordering(559) 00:14:51.833 fused_ordering(560) 00:14:51.833 fused_ordering(561) 00:14:51.833 fused_ordering(562) 00:14:51.833 fused_ordering(563) 00:14:51.833 fused_ordering(564) 00:14:51.833 fused_ordering(565) 00:14:51.833 fused_ordering(566) 00:14:51.833 fused_ordering(567) 00:14:51.833 fused_ordering(568) 00:14:51.833 fused_ordering(569) 00:14:51.833 fused_ordering(570) 00:14:51.833 fused_ordering(571) 00:14:51.833 fused_ordering(572) 00:14:51.833 fused_ordering(573) 00:14:51.833 fused_ordering(574) 00:14:51.833 fused_ordering(575) 00:14:51.833 fused_ordering(576) 00:14:51.833 fused_ordering(577) 00:14:51.833 fused_ordering(578) 00:14:51.833 fused_ordering(579) 00:14:51.833 fused_ordering(580) 00:14:51.833 fused_ordering(581) 00:14:51.833 fused_ordering(582) 00:14:51.833 fused_ordering(583) 00:14:51.833 fused_ordering(584) 00:14:51.833 fused_ordering(585) 00:14:51.833 fused_ordering(586) 00:14:51.833 fused_ordering(587) 00:14:51.833 fused_ordering(588) 00:14:51.833 fused_ordering(589) 00:14:51.833 fused_ordering(590) 00:14:51.833 fused_ordering(591) 00:14:51.833 fused_ordering(592) 00:14:51.833 fused_ordering(593) 00:14:51.833 fused_ordering(594) 00:14:51.833 fused_ordering(595) 00:14:51.833 fused_ordering(596) 00:14:51.833 fused_ordering(597) 00:14:51.833 fused_ordering(598) 00:14:51.833 fused_ordering(599) 
00:14:51.833 fused_ordering(600) 00:14:51.833 fused_ordering(601) 00:14:51.833 fused_ordering(602) 00:14:51.833 fused_ordering(603) 00:14:51.834 fused_ordering(604) 00:14:51.834 fused_ordering(605) 00:14:51.834 fused_ordering(606) 00:14:51.834 fused_ordering(607) 00:14:51.834 fused_ordering(608) 00:14:51.834 fused_ordering(609) 00:14:51.834 fused_ordering(610) 00:14:51.834 fused_ordering(611) 00:14:51.834 fused_ordering(612) 00:14:51.834 fused_ordering(613) 00:14:51.834 fused_ordering(614) 00:14:51.834 fused_ordering(615) 00:14:52.770 fused_ordering(616) 00:14:52.770 fused_ordering(617) 00:14:52.770 fused_ordering(618) 00:14:52.770 fused_ordering(619) 00:14:52.770 fused_ordering(620) 00:14:52.770 fused_ordering(621) 00:14:52.770 fused_ordering(622) 00:14:52.770 fused_ordering(623) 00:14:52.770 fused_ordering(624) 00:14:52.770 fused_ordering(625) 00:14:52.770 fused_ordering(626) 00:14:52.770 fused_ordering(627) 00:14:52.770 fused_ordering(628) 00:14:52.770 fused_ordering(629) 00:14:52.770 fused_ordering(630) 00:14:52.770 fused_ordering(631) 00:14:52.770 fused_ordering(632) 00:14:52.770 fused_ordering(633) 00:14:52.770 fused_ordering(634) 00:14:52.770 fused_ordering(635) 00:14:52.770 fused_ordering(636) 00:14:52.770 fused_ordering(637) 00:14:52.770 fused_ordering(638) 00:14:52.770 fused_ordering(639) 00:14:52.770 fused_ordering(640) 00:14:52.770 fused_ordering(641) 00:14:52.770 fused_ordering(642) 00:14:52.770 fused_ordering(643) 00:14:52.770 fused_ordering(644) 00:14:52.770 fused_ordering(645) 00:14:52.770 fused_ordering(646) 00:14:52.770 fused_ordering(647) 00:14:52.770 fused_ordering(648) 00:14:52.770 fused_ordering(649) 00:14:52.770 fused_ordering(650) 00:14:52.770 fused_ordering(651) 00:14:52.770 fused_ordering(652) 00:14:52.770 fused_ordering(653) 00:14:52.770 fused_ordering(654) 00:14:52.770 fused_ordering(655) 00:14:52.770 fused_ordering(656) 00:14:52.770 fused_ordering(657) 00:14:52.770 fused_ordering(658) 00:14:52.770 fused_ordering(659) 00:14:52.770 fused_ordering(660) 00:14:52.770 fused_ordering(661) 00:14:52.770 fused_ordering(662) 00:14:52.770 fused_ordering(663) 00:14:52.770 fused_ordering(664) 00:14:52.770 fused_ordering(665) 00:14:52.770 fused_ordering(666) 00:14:52.770 fused_ordering(667) 00:14:52.770 fused_ordering(668) 00:14:52.770 fused_ordering(669) 00:14:52.770 fused_ordering(670) 00:14:52.770 fused_ordering(671) 00:14:52.770 fused_ordering(672) 00:14:52.770 fused_ordering(673) 00:14:52.770 fused_ordering(674) 00:14:52.770 fused_ordering(675) 00:14:52.770 fused_ordering(676) 00:14:52.770 fused_ordering(677) 00:14:52.770 fused_ordering(678) 00:14:52.771 fused_ordering(679) 00:14:52.771 fused_ordering(680) 00:14:52.771 fused_ordering(681) 00:14:52.771 fused_ordering(682) 00:14:52.771 fused_ordering(683) 00:14:52.771 fused_ordering(684) 00:14:52.771 fused_ordering(685) 00:14:52.771 fused_ordering(686) 00:14:52.771 fused_ordering(687) 00:14:52.771 fused_ordering(688) 00:14:52.771 fused_ordering(689) 00:14:52.771 fused_ordering(690) 00:14:52.771 fused_ordering(691) 00:14:52.771 fused_ordering(692) 00:14:52.771 fused_ordering(693) 00:14:52.771 fused_ordering(694) 00:14:52.771 fused_ordering(695) 00:14:52.771 fused_ordering(696) 00:14:52.771 fused_ordering(697) 00:14:52.771 fused_ordering(698) 00:14:52.771 fused_ordering(699) 00:14:52.771 fused_ordering(700) 00:14:52.771 fused_ordering(701) 00:14:52.771 fused_ordering(702) 00:14:52.771 fused_ordering(703) 00:14:52.771 fused_ordering(704) 00:14:52.771 fused_ordering(705) 00:14:52.771 fused_ordering(706) 00:14:52.771 
fused_ordering(707) 00:14:52.771 fused_ordering(708) 00:14:52.771 fused_ordering(709) 00:14:52.771 fused_ordering(710) 00:14:52.771 fused_ordering(711) 00:14:52.771 fused_ordering(712) 00:14:52.771 fused_ordering(713) 00:14:52.771 fused_ordering(714) 00:14:52.771 fused_ordering(715) 00:14:52.771 fused_ordering(716) 00:14:52.771 fused_ordering(717) 00:14:52.771 fused_ordering(718) 00:14:52.771 fused_ordering(719) 00:14:52.771 fused_ordering(720) 00:14:52.771 fused_ordering(721) 00:14:52.771 fused_ordering(722) 00:14:52.771 fused_ordering(723) 00:14:52.771 fused_ordering(724) 00:14:52.771 fused_ordering(725) 00:14:52.771 fused_ordering(726) 00:14:52.771 fused_ordering(727) 00:14:52.771 fused_ordering(728) 00:14:52.771 fused_ordering(729) 00:14:52.771 fused_ordering(730) 00:14:52.771 fused_ordering(731) 00:14:52.771 fused_ordering(732) 00:14:52.771 fused_ordering(733) 00:14:52.771 fused_ordering(734) 00:14:52.771 fused_ordering(735) 00:14:52.771 fused_ordering(736) 00:14:52.771 fused_ordering(737) 00:14:52.771 fused_ordering(738) 00:14:52.771 fused_ordering(739) 00:14:52.771 fused_ordering(740) 00:14:52.771 fused_ordering(741) 00:14:52.771 fused_ordering(742) 00:14:52.771 fused_ordering(743) 00:14:52.771 fused_ordering(744) 00:14:52.771 fused_ordering(745) 00:14:52.771 fused_ordering(746) 00:14:52.771 fused_ordering(747) 00:14:52.771 fused_ordering(748) 00:14:52.771 fused_ordering(749) 00:14:52.771 fused_ordering(750) 00:14:52.771 fused_ordering(751) 00:14:52.771 fused_ordering(752) 00:14:52.771 fused_ordering(753) 00:14:52.771 fused_ordering(754) 00:14:52.771 fused_ordering(755) 00:14:52.771 fused_ordering(756) 00:14:52.771 fused_ordering(757) 00:14:52.771 fused_ordering(758) 00:14:52.771 fused_ordering(759) 00:14:52.771 fused_ordering(760) 00:14:52.771 fused_ordering(761) 00:14:52.771 fused_ordering(762) 00:14:52.771 fused_ordering(763) 00:14:52.771 fused_ordering(764) 00:14:52.771 fused_ordering(765) 00:14:52.771 fused_ordering(766) 00:14:52.771 fused_ordering(767) 00:14:52.771 fused_ordering(768) 00:14:52.771 fused_ordering(769) 00:14:52.771 fused_ordering(770) 00:14:52.771 fused_ordering(771) 00:14:52.771 fused_ordering(772) 00:14:52.771 fused_ordering(773) 00:14:52.771 fused_ordering(774) 00:14:52.771 fused_ordering(775) 00:14:52.771 fused_ordering(776) 00:14:52.771 fused_ordering(777) 00:14:52.771 fused_ordering(778) 00:14:52.771 fused_ordering(779) 00:14:52.771 fused_ordering(780) 00:14:52.771 fused_ordering(781) 00:14:52.771 fused_ordering(782) 00:14:52.771 fused_ordering(783) 00:14:52.771 fused_ordering(784) 00:14:52.771 fused_ordering(785) 00:14:52.771 fused_ordering(786) 00:14:52.771 fused_ordering(787) 00:14:52.771 fused_ordering(788) 00:14:52.771 fused_ordering(789) 00:14:52.771 fused_ordering(790) 00:14:52.771 fused_ordering(791) 00:14:52.771 fused_ordering(792) 00:14:52.771 fused_ordering(793) 00:14:52.771 fused_ordering(794) 00:14:52.771 fused_ordering(795) 00:14:52.771 fused_ordering(796) 00:14:52.771 fused_ordering(797) 00:14:52.771 fused_ordering(798) 00:14:52.771 fused_ordering(799) 00:14:52.771 fused_ordering(800) 00:14:52.771 fused_ordering(801) 00:14:52.771 fused_ordering(802) 00:14:52.771 fused_ordering(803) 00:14:52.771 fused_ordering(804) 00:14:52.771 fused_ordering(805) 00:14:52.771 fused_ordering(806) 00:14:52.771 fused_ordering(807) 00:14:52.771 fused_ordering(808) 00:14:52.771 fused_ordering(809) 00:14:52.771 fused_ordering(810) 00:14:52.771 fused_ordering(811) 00:14:52.771 fused_ordering(812) 00:14:52.771 fused_ordering(813) 00:14:52.771 fused_ordering(814) 
00:14:52.771 fused_ordering(815) 00:14:52.771 fused_ordering(816) 00:14:52.771 fused_ordering(817) 00:14:52.771 fused_ordering(818) 00:14:52.771 fused_ordering(819) 00:14:52.771 fused_ordering(820) 00:14:53.339 fused_ordering(821) 00:14:53.339 fused_ordering(822) 00:14:53.339 fused_ordering(823) 00:14:53.339 fused_ordering(824) 00:14:53.339 fused_ordering(825) 00:14:53.339 fused_ordering(826) 00:14:53.339 fused_ordering(827) 00:14:53.339 fused_ordering(828) 00:14:53.339 fused_ordering(829) 00:14:53.339 fused_ordering(830) 00:14:53.339 fused_ordering(831) 00:14:53.339 fused_ordering(832) 00:14:53.339 fused_ordering(833) 00:14:53.339 fused_ordering(834) 00:14:53.339 fused_ordering(835) 00:14:53.339 fused_ordering(836) 00:14:53.339 fused_ordering(837) 00:14:53.339 fused_ordering(838) 00:14:53.339 fused_ordering(839) 00:14:53.339 fused_ordering(840) 00:14:53.339 fused_ordering(841) 00:14:53.339 fused_ordering(842) 00:14:53.339 fused_ordering(843) 00:14:53.339 fused_ordering(844) 00:14:53.339 fused_ordering(845) 00:14:53.339 fused_ordering(846) 00:14:53.339 fused_ordering(847) 00:14:53.339 fused_ordering(848) 00:14:53.339 fused_ordering(849) 00:14:53.339 fused_ordering(850) 00:14:53.339 fused_ordering(851) 00:14:53.339 fused_ordering(852) 00:14:53.339 fused_ordering(853) 00:14:53.339 fused_ordering(854) 00:14:53.339 fused_ordering(855) 00:14:53.339 fused_ordering(856) 00:14:53.339 fused_ordering(857) 00:14:53.339 fused_ordering(858) 00:14:53.339 fused_ordering(859) 00:14:53.339 fused_ordering(860) 00:14:53.339 fused_ordering(861) 00:14:53.339 fused_ordering(862) 00:14:53.339 fused_ordering(863) 00:14:53.339 fused_ordering(864) 00:14:53.339 fused_ordering(865) 00:14:53.339 fused_ordering(866) 00:14:53.339 fused_ordering(867) 00:14:53.339 fused_ordering(868) 00:14:53.339 fused_ordering(869) 00:14:53.339 fused_ordering(870) 00:14:53.339 fused_ordering(871) 00:14:53.339 fused_ordering(872) 00:14:53.339 fused_ordering(873) 00:14:53.339 fused_ordering(874) 00:14:53.339 fused_ordering(875) 00:14:53.339 fused_ordering(876) 00:14:53.339 fused_ordering(877) 00:14:53.339 fused_ordering(878) 00:14:53.339 fused_ordering(879) 00:14:53.339 fused_ordering(880) 00:14:53.339 fused_ordering(881) 00:14:53.339 fused_ordering(882) 00:14:53.339 fused_ordering(883) 00:14:53.339 fused_ordering(884) 00:14:53.339 fused_ordering(885) 00:14:53.339 fused_ordering(886) 00:14:53.339 fused_ordering(887) 00:14:53.339 fused_ordering(888) 00:14:53.339 fused_ordering(889) 00:14:53.339 fused_ordering(890) 00:14:53.339 fused_ordering(891) 00:14:53.339 fused_ordering(892) 00:14:53.339 fused_ordering(893) 00:14:53.339 fused_ordering(894) 00:14:53.339 fused_ordering(895) 00:14:53.339 fused_ordering(896) 00:14:53.339 fused_ordering(897) 00:14:53.339 fused_ordering(898) 00:14:53.339 fused_ordering(899) 00:14:53.339 fused_ordering(900) 00:14:53.339 fused_ordering(901) 00:14:53.339 fused_ordering(902) 00:14:53.339 fused_ordering(903) 00:14:53.339 fused_ordering(904) 00:14:53.339 fused_ordering(905) 00:14:53.339 fused_ordering(906) 00:14:53.339 fused_ordering(907) 00:14:53.339 fused_ordering(908) 00:14:53.339 fused_ordering(909) 00:14:53.339 fused_ordering(910) 00:14:53.339 fused_ordering(911) 00:14:53.339 fused_ordering(912) 00:14:53.339 fused_ordering(913) 00:14:53.339 fused_ordering(914) 00:14:53.339 fused_ordering(915) 00:14:53.339 fused_ordering(916) 00:14:53.339 fused_ordering(917) 00:14:53.339 fused_ordering(918) 00:14:53.339 fused_ordering(919) 00:14:53.339 fused_ordering(920) 00:14:53.339 fused_ordering(921) 00:14:53.339 
fused_ordering(922) 00:14:53.339 fused_ordering(923) 00:14:53.339 fused_ordering(924) 00:14:53.339 fused_ordering(925) 00:14:53.339 fused_ordering(926) 00:14:53.339 fused_ordering(927) 00:14:53.339 fused_ordering(928) 00:14:53.339 fused_ordering(929) 00:14:53.339 fused_ordering(930) 00:14:53.339 fused_ordering(931) 00:14:53.339 fused_ordering(932) 00:14:53.339 fused_ordering(933) 00:14:53.339 fused_ordering(934) 00:14:53.339 fused_ordering(935) 00:14:53.339 fused_ordering(936) 00:14:53.339 fused_ordering(937) 00:14:53.339 fused_ordering(938) 00:14:53.339 fused_ordering(939) 00:14:53.339 fused_ordering(940) 00:14:53.339 fused_ordering(941) 00:14:53.339 fused_ordering(942) 00:14:53.339 fused_ordering(943) 00:14:53.339 fused_ordering(944) 00:14:53.339 fused_ordering(945) 00:14:53.339 fused_ordering(946) 00:14:53.339 fused_ordering(947) 00:14:53.339 fused_ordering(948) 00:14:53.339 fused_ordering(949) 00:14:53.339 fused_ordering(950) 00:14:53.339 fused_ordering(951) 00:14:53.339 fused_ordering(952) 00:14:53.339 fused_ordering(953) 00:14:53.339 fused_ordering(954) 00:14:53.339 fused_ordering(955) 00:14:53.339 fused_ordering(956) 00:14:53.339 fused_ordering(957) 00:14:53.339 fused_ordering(958) 00:14:53.339 fused_ordering(959) 00:14:53.339 fused_ordering(960) 00:14:53.339 fused_ordering(961) 00:14:53.339 fused_ordering(962) 00:14:53.339 fused_ordering(963) 00:14:53.339 fused_ordering(964) 00:14:53.339 fused_ordering(965) 00:14:53.339 fused_ordering(966) 00:14:53.339 fused_ordering(967) 00:14:53.340 fused_ordering(968) 00:14:53.340 fused_ordering(969) 00:14:53.340 fused_ordering(970) 00:14:53.340 fused_ordering(971) 00:14:53.340 fused_ordering(972) 00:14:53.340 fused_ordering(973) 00:14:53.340 fused_ordering(974) 00:14:53.340 fused_ordering(975) 00:14:53.340 fused_ordering(976) 00:14:53.340 fused_ordering(977) 00:14:53.340 fused_ordering(978) 00:14:53.340 fused_ordering(979) 00:14:53.340 fused_ordering(980) 00:14:53.340 fused_ordering(981) 00:14:53.340 fused_ordering(982) 00:14:53.340 fused_ordering(983) 00:14:53.340 fused_ordering(984) 00:14:53.340 fused_ordering(985) 00:14:53.340 fused_ordering(986) 00:14:53.340 fused_ordering(987) 00:14:53.340 fused_ordering(988) 00:14:53.340 fused_ordering(989) 00:14:53.340 fused_ordering(990) 00:14:53.340 fused_ordering(991) 00:14:53.340 fused_ordering(992) 00:14:53.340 fused_ordering(993) 00:14:53.340 fused_ordering(994) 00:14:53.340 fused_ordering(995) 00:14:53.340 fused_ordering(996) 00:14:53.340 fused_ordering(997) 00:14:53.340 fused_ordering(998) 00:14:53.340 fused_ordering(999) 00:14:53.340 fused_ordering(1000) 00:14:53.340 fused_ordering(1001) 00:14:53.340 fused_ordering(1002) 00:14:53.340 fused_ordering(1003) 00:14:53.340 fused_ordering(1004) 00:14:53.340 fused_ordering(1005) 00:14:53.340 fused_ordering(1006) 00:14:53.340 fused_ordering(1007) 00:14:53.340 fused_ordering(1008) 00:14:53.340 fused_ordering(1009) 00:14:53.340 fused_ordering(1010) 00:14:53.340 fused_ordering(1011) 00:14:53.340 fused_ordering(1012) 00:14:53.340 fused_ordering(1013) 00:14:53.340 fused_ordering(1014) 00:14:53.340 fused_ordering(1015) 00:14:53.340 fused_ordering(1016) 00:14:53.340 fused_ordering(1017) 00:14:53.340 fused_ordering(1018) 00:14:53.340 fused_ordering(1019) 00:14:53.340 fused_ordering(1020) 00:14:53.340 fused_ordering(1021) 00:14:53.340 fused_ordering(1022) 00:14:53.340 fused_ordering(1023) 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:53.340 18:51:38 
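The 1024 counter lines above are the payload of the test: the fused_ordering app connects to cnode1 over the transport ID given on its command line, exercises fused command pairs, and prints fused_ordering(n) as each completion arrives, so a gap-free, strictly increasing run from 0 through 1023 means ordering was preserved end to end (the occasional jumps in the timestamps likely mark batch boundaries). A quick offline check over a captured log, with a hypothetical file name:

  grep -oE 'fused_ordering\([0-9]+\)' fused.log \
    | grep -oE '[0-9]+' \
    | awk '$1 != NR - 1 { print "sequence break at entry " NR; exit 1 }'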
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.340 rmmod nvme_tcp 00:14:53.340 rmmod nvme_fabrics 00:14:53.340 rmmod nvme_keyring 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2462627 ']' 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2462627 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2462627 ']' 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2462627 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2462627 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2462627' 00:14:53.340 killing process with pid 2462627 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2462627 00:14:53.340 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2462627 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.599 18:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.135 00:14:56.135 real 0m12.393s 00:14:56.135 user 0m7.324s 00:14:56.135 sys 0m6.451s 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 ************************************ 00:14:56.135 END TEST nvmf_fused_ordering 00:14:56.135 ************************************ 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 ************************************ 00:14:56.135 START TEST nvmf_ns_masking 00:14:56.135 ************************************ 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:56.135 * Looking for test storage... 00:14:56.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[echoed $PATH, same toolchain entries repeated several times, then the standard system PATH; duplicate segments condensed] 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fbf8885e-fde6-4804-906e-903ea4bd4fb2 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f490fc24-58cb-4a74-94c7-383900cea86d 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=11175bf7-ef79-4818-b058-143e533787ff 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.135 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.136 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.136 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.136 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.136 18:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.407 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.407 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.408 
18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:01.408 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:01.408 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
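
The device scan traced above is driven purely by PCI vendor:device IDs: nvmf/common.sh builds the e810/x722/mlx ID tables and then walks every detected NIC against them. A minimal standalone sketch of that matching, not the harness code itself; it assumes "lspci -nD" output of the form "0000:af:00.0 0200: 8086:159b (rev 02)" and takes its IDs from the tables logged above:

    #!/usr/bin/env bash
    # Sketch only: find E810/X722 ports by PCI vendor:device ID, using the
    # same IDs the e810/x722 arrays above are built from.
    while read -r addr _class id _rest; do
        case "$id" in
            8086:1592|8086:159b) echo "E810 port at $addr" ;;
            8086:37d2)           echo "X722 port at $addr" ;;
        esac
    done < <(lspci -nD)

In this run both matches are the two ports of a dual-port E810 (0x159b) already bound to the ice driver, which is why the '[[ ice == unknown ]]' and '[[ ice == unbound ]]' branches above fall through.
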
00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:01.408 Found net devices under 0000:af:00.0: cvl_0_0 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:01.408 Found net devices under 0000:af:00.1: cvl_0_1 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.408 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:15:01.667 00:15:01.667 --- 10.0.0.2 ping statistics --- 00:15:01.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.667 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:01.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:15:01.667 00:15:01.667 --- 10.0.0.1 ping statistics --- 00:15:01.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.667 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.667 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2467058 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2467058 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2467058 ']' 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.926 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.926 [2024-07-24 18:51:46.760102] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
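
Everything from nvmf_tcp_init down to the two pings above builds the point-to-point topology the rest of this test runs on: one E810 port stays in the root namespace as the initiator, the other moves into a private namespace and becomes the target. Condensed into a sketch; the commands are the ones traced above, and the interface names cvl_0_0/cvl_0_1 are specific to this machine:

    # Sketch of the nvmf_tcp_init topology traced above; substitute your
    # own NIC port names for cvl_0_0/cvl_0_1.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side (root ns)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

The two ping transcripts above (0.172 ms and 0.250 ms round trips) are that check passing in both directions; nvmf_tgt then runs inside the namespace via NVMF_TARGET_NS_CMD, which is the 'ip netns exec cvl_0_0_ns_spdk' prefix visible on the nvmf_tgt invocation in this trace.
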
00:15:01.926 [2024-07-24 18:51:46.760161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.926 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.926 [2024-07-24 18:51:46.846495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.185 [2024-07-24 18:51:46.936438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.185 [2024-07-24 18:51:46.936479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.185 [2024-07-24 18:51:46.936489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.185 [2024-07-24 18:51:46.936498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.185 [2024-07-24 18:51:46.936505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.185 [2024-07-24 18:51:46.936536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.752 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:03.010 [2024-07-24 18:51:47.967408] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.010 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:03.010 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:03.010 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:03.269 Malloc1 00:15:03.269 18:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:03.527 Malloc2 00:15:03.527 18:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:03.786 18:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:04.044 18:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.303 [2024-07-24 18:51:49.226194] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.303 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:04.303 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11175bf7-ef79-4818-b058-143e533787ff -a 10.0.0.2 -s 4420 -i 4 00:15:04.561 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.561 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:15:04.561 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.561 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:04.561 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:15:06.463 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:06.463 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:06.463 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:06.722 [ 0]:0x1 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8869ba4f1b4a43f9a9fa6c290de46107 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8869ba4f1b4a43f9a9fa6c290de46107 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.722 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:06.981 [ 0]:0x1 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8869ba4f1b4a43f9a9fa6c290de46107 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8869ba4f1b4a43f9a9fa6c290de46107 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:06.981 [ 1]:0x2 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:06.981 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.240 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.499 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11175bf7-ef79-4818-b058-143e533787ff -a 10.0.0.2 -s 4420 -i 4 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:15:07.758 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.344 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.345 18:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:10.345 [ 0]:0x2 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.345 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.345 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:10.345 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.345 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.345 [ 0]:0x1 00:15:10.345 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.345 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8869ba4f1b4a43f9a9fa6c290de46107 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8869ba4f1b4a43f9a9fa6c290de46107 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:10.346 [ 1]:0x2 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:10.346 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.346 18:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.607 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:10.607 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:10.607 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:10.607 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:10.608 [ 0]:0x2 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.608 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.866 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:10.866 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.866 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
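
Every '[ 0]:0x1'-style check in this trace is the ns_is_visible helper from target/ns_masking.sh: list the active namespaces on the controller, then read the NGUID back. A freestanding approximation follows; the early return on a failed grep is a simplification, since the traced helper always runs id-ns:

    # Approximation of the ns_is_visible checks traced above: a namespace
    # is visible to this controller when list-ns reports its NSID and
    # id-ns returns a non-zero NGUID.
    ns_is_visible() {    # usage: ns_is_visible /dev/nvme0 0x1
        local ctrl=$1 nsid=$2 nguid
        nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
        nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }

When a namespace is masked, the probe comes back with an all-zero NGUID, which is why the NOT wrapper above expects the comparison against 32 zeros to fail.
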
00:15:10.866 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.866 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.125 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:11.125 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11175bf7-ef79-4818-b058-143e533787ff -a 10.0.0.2 -s 4420 -i 4 00:15:11.125 18:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:11.125 18:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:15:11.125 18:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.125 18:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:15:11.125 18:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:15:11.125 18:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:13.657 [ 0]:0x1 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=8869ba4f1b4a43f9a9fa6c290de46107 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8869ba4f1b4a43f9a9fa6c290de46107 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:13.657 [ 1]:0x2 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.657 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:13.915 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:13.916 [ 0]:0x2 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.916 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:14.175 [2024-07-24 18:51:59.084060] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:14.175 request: 00:15:14.175 { 00:15:14.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.175 "nsid": 2, 00:15:14.175 "host": 
"nqn.2016-06.io.spdk:host1", 00:15:14.175 "method": "nvmf_ns_remove_host", 00:15:14.175 "req_id": 1 00:15:14.175 } 00:15:14.175 Got JSON-RPC error response 00:15:14.175 response: 00:15:14.175 { 00:15:14.175 "code": -32602, 00:15:14.175 "message": "Invalid parameters" 00:15:14.175 } 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.175 [ 0]:0x2 00:15:14.175 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.175 18:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=474788d59a4e4a259879e5c5c8b13690 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 474788d59a4e4a259879e5c5c8b13690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2469345 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2469345 /var/tmp/host.sock 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2469345 ']' 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:14.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.434 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:14.434 [2024-07-24 18:51:59.312028] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
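Stripped of xtrace noise, the masking assertions above reduce to a short sequence of nvme-cli and RPC calls. A condensed sketch, with commands and arguments taken from this run (the rpc.py path is shortened here for readability; in the trace it is invoked by its full workspace path):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Hide namespace 1 from host1. A masked namespace drops out of list-ns and
# its NGUID reads back as all zeros via id-ns.
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0 | grep 0x1                    # no output once masked
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # 000...000 once masked

# Namespace 2 never had host1 in an allowed-host list, so removing it is
# rejected with JSON-RPC error -32602 (Invalid parameters), as logged above.
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1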
00:15:14.434 [2024-07-24 18:51:59.312089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469345 ] 00:15:14.434 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.434 [2024-07-24 18:51:59.394127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.694 [2024-07-24 18:51:59.494643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.261 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.261 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:15.261 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.519 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.777 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fbf8885e-fde6-4804-906e-903ea4bd4fb2 00:15:15.777 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:16.036 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FBF8885EFDE64804906E903EA4BD4FB2 -i 00:15:16.036 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f490fc24-58cb-4a74-94c7-383900cea86d 00:15:16.036 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:16.295 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F490FC2458CB4A7494C7383900CEA86D -i 00:15:16.553 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:16.553 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:16.811 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:16.811 18:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:17.378 nvme0n1 00:15:17.378 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:17.378 18:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:17.945 nvme1n2 00:15:17.945 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:17.945 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:17.945 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:17.945 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:17.945 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:18.204 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:18.204 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:18.204 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:18.204 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:18.463 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fbf8885e-fde6-4804-906e-903ea4bd4fb2 == \f\b\f\8\8\8\5\e\-\f\d\e\6\-\4\8\0\4\-\9\0\6\e\-\9\0\3\e\a\4\b\d\4\f\b\2 ]] 00:15:18.463 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:18.463 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:18.463 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f490fc24-58cb-4a74-94c7-383900cea86d == \f\4\9\0\f\c\2\4\-\5\8\c\b\-\4\a\7\4\-\9\4\c\7\-\3\8\3\9\0\0\c\e\a\8\6\d ]] 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2469345 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2469345 ']' 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2469345 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2469345 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 2469345' 00:15:18.722 killing process with pid 2469345 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2469345 00:15:18.722 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2469345 00:15:18.980 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.239 rmmod nvme_tcp 00:15:19.239 rmmod nvme_fabrics 00:15:19.239 rmmod nvme_keyring 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2467058 ']' 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2467058 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2467058 ']' 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2467058 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.239 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2467058 00:15:19.497 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.497 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.497 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2467058' 00:15:19.497 killing process with pid 2467058 00:15:19.497 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2467058 00:15:19.497 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2467058 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.755 
18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.755 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.657 00:15:21.657 real 0m25.908s 00:15:21.657 user 0m30.557s 00:15:21.657 sys 0m6.881s 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.657 ************************************ 00:15:21.657 END TEST nvmf_ns_masking 00:15:21.657 ************************************ 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.657 18:52:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.915 ************************************ 00:15:21.915 START TEST nvmf_nvme_cli 00:15:21.915 ************************************ 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.915 * Looking for test storage... 
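The teardown that closed out ns_masking above repeats at the end of every nvmf target test in this log: unload the kernel initiator modules, kill the target, and tear down the test network namespace. A rough sketch of what nvmftestfini and _remove_spdk_ns amount to, inferred from this trace (the netns-delete step is an assumption; the trace only shows _remove_spdk_ns being evaluated):

modprobe -v -r nvme-tcp           # also drags out nvme_fabrics / nvme_keyring
kill "$nvmfpid"                   # target pid recorded by nvmfappstart
ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # shown verbatim at the end of the trace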
00:15:21.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.915 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.916 18:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.916 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.542 18:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:28.542 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:28.542 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:28.542 18:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:28.542 Found net devices under 0000:af:00.0: cvl_0_0 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:28.542 Found net devices under 0000:af:00.1: cvl_0_1 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:28.542 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.542 18:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:28.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:15:28.543 00:15:28.543 --- 10.0.0.2 ping statistics --- 00:15:28.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.543 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:15:28.543 00:15:28.543 --- 10.0.0.1 ping statistics --- 00:15:28.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.543 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2473882 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2473882 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2473882 ']' 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.543 18:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.543 [2024-07-24 18:52:12.750992] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
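The network bring-up traced above is easier to follow in one place. Every command below appears verbatim in this run: the target-side E810 port cvl_0_0 is moved into its own network namespace, the initiator keeps cvl_0_1, and the two ports talk over 10.0.0.0/24:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # 0.157 ms, verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # 0.230 ms, verified above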
00:15:28.543 [2024-07-24 18:52:12.751047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.543 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.543 [2024-07-24 18:52:12.835112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.543 [2024-07-24 18:52:12.927883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.543 [2024-07-24 18:52:12.927924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.543 [2024-07-24 18:52:12.927935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.543 [2024-07-24 18:52:12.927944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.543 [2024-07-24 18:52:12.927951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.543 [2024-07-24 18:52:12.928050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.543 [2024-07-24 18:52:12.928182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.543 [2024-07-24 18:52:12.928295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.543 [2024-07-24 18:52:12.928296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.802 [2024-07-24 18:52:13.742370] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.802 Malloc0 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:28.802 18:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.802 Malloc1 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.802 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.062 [2024-07-24 18:52:13.833202] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:29.062 00:15:29.062 Discovery Log Number of Records 2, Generation counter 2 00:15:29.062 =====Discovery Log Entry 0====== 00:15:29.062 trtype: tcp 00:15:29.062 adrfam: ipv4 00:15:29.062 subtype: current discovery subsystem 00:15:29.062 treq: not required 
00:15:29.062 portid: 0 00:15:29.062 trsvcid: 4420 00:15:29.062 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:29.062 traddr: 10.0.0.2 00:15:29.062 eflags: explicit discovery connections, duplicate discovery information 00:15:29.062 sectype: none 00:15:29.062 =====Discovery Log Entry 1====== 00:15:29.062 trtype: tcp 00:15:29.062 adrfam: ipv4 00:15:29.062 subtype: nvme subsystem 00:15:29.062 treq: not required 00:15:29.062 portid: 0 00:15:29.062 trsvcid: 4420 00:15:29.062 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:29.062 traddr: 10.0.0.2 00:15:29.062 eflags: none 00:15:29.062 sectype: none 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:29.062 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:30.440 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:30.440 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:15:30.440 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.440 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:15:30.440 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:15:30.440 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:32.975 /dev/nvme0n1 ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.975 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.975 rmmod nvme_tcp 00:15:32.975 rmmod nvme_fabrics 00:15:32.975 rmmod nvme_keyring 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2473882 ']' 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2473882 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2473882 ']' 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2473882 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2473882 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2473882' 00:15:32.975 killing process with pid 2473882 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2473882 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2473882 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.975 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.976 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.511 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:35.511 00:15:35.511 real 0m13.325s 00:15:35.511 user 0m21.589s 00:15:35.511 sys 0m5.061s 00:15:35.511 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:35.511 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.511 ************************************ 00:15:35.511 END TEST nvmf_nvme_cli 00:15:35.511 ************************************ 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.511 ************************************ 00:15:35.511 START TEST nvmf_vfio_user 00:15:35.511 ************************************ 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:35.511 * Looking for test storage... 
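Before the vfio-user suite's output begins, the device-enumeration helper that the just-finished nvme_cli test leaned on deserves a readable form. The nvmf/common.sh@521-526 xtrace lines above show get_nvme_devs reading `nvme list` one row at a time and keeping only first fields that name a device node. A minimal sketch reconstructed from those trace lines (not a verbatim copy of nvmf/common.sh):

    get_nvme_devs() {
        local dev _
        while read -r dev _; do
            # header rows ("Node", "-----...") fail this test; device rows pass
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }
    devs=($(get_nvme_devs))   # here: (/dev/nvme0n2 /dev/nvme0n1), so nvme_num=2

target/nvme_cli.sh@59-62 then compares this count against the count taken before connecting, which is what the `(( nvme_num <= nvme_num_before_connection ))` check above is doing.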
00:15:35.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.511 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:35.512 18:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2475331 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2475331' 00:15:35.512 Process pid: 2475331 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2475331 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2475331 ']' 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.512 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:35.512 [2024-07-24 18:52:20.248503] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:15:35.512 [2024-07-24 18:52:20.248565] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.512 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.512 [2024-07-24 18:52:20.329758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.512 [2024-07-24 18:52:20.420354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.512 [2024-07-24 18:52:20.420398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
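Stripped of the xtrace noise, the bring-up traced above is the SPDK target started on four cores plus a wait for its RPC socket. A condensed sketch with the flags exactly as traced; the rpc_get_methods poll is a simplified stand-in for the autotest waitforlisten helper, which does more bookkeeping:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    export TEST_TRANSPORT=VFIOUSER
    rm -rf /var/run/vfio-user
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # block until the target answers on the default socket, /var/tmp/spdk.sock
    until "$rpc_py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The EAL parameter dump that follows (-l 0,1,2,3, --base-virtaddr, --file-prefix=spdk0) is DPDK echoing back the translation of those same app flags.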
00:15:35.512 [2024-07-24 18:52:20.420413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.512 [2024-07-24 18:52:20.420422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.512 [2024-07-24 18:52:20.420429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.512 [2024-07-24 18:52:20.420488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.512 [2024-07-24 18:52:20.420634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.512 [2024-07-24 18:52:20.420744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.512 [2024-07-24 18:52:20.420744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.771 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.771 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:35.771 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:36.709 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:36.968 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:36.968 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:36.968 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.968 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:36.968 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:37.226 Malloc1 00:15:37.226 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:37.484 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:37.743 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:38.002 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.002 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:38.002 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:38.261 Malloc2 00:15:38.261 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
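Both vfio-user controllers that the reactors now serve are configured with the same RPC sequence; the target/nvmf_vfio_user.sh@64-74 steps above reduce to the following loop (rpc.py path shortened to $rpc_py, arguments exactly as traced):

    "$rpc_py" nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        "$rpc_py" bdev_malloc_create 64 512 -b Malloc$i      # 64 MiB bdev, 512 B blocks
        "$rpc_py" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        "$rpc_py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        "$rpc_py" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Note that the listener address is a directory rather than an IP:port pair: each /var/run/vfio-user/domain/vfio-user$i/$i ends up holding the cntrl file that a client opens and maps as if it were PCI BAR space (the vfio_user_pci.c BAR lines further down).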
00:15:38.520 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:38.779 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:39.038 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:39.038 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:39.038 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.038 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:39.038 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:39.038 18:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:39.038 [2024-07-24 18:52:23.974393] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:15:39.038 [2024-07-24 18:52:23.974428] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476125 ] 00:15:39.038 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.038 [2024-07-24 18:52:24.010131] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:39.038 [2024-07-24 18:52:24.018106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:39.038 [2024-07-24 18:52:24.018131] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff75572b000 00:15:39.038 [2024-07-24 18:52:24.019105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.020101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.021102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.022114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.023114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.024119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.025132] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.026138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.038 [2024-07-24 18:52:24.027149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:39.038 [2024-07-24 18:52:24.027162] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff755720000 00:15:39.038 [2024-07-24 18:52:24.028572] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:39.299 [2024-07-24 18:52:24.050297] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:39.299 [2024-07-24 18:52:24.050331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:39.299 [2024-07-24 18:52:24.053373] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:39.299 [2024-07-24 18:52:24.053427] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:39.299 [2024-07-24 18:52:24.053534] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:39.299 [2024-07-24 18:52:24.053555] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:39.299 [2024-07-24 18:52:24.053565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:39.299 [2024-07-24 18:52:24.054364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:39.299 [2024-07-24 18:52:24.054379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:39.299 [2024-07-24 18:52:24.054389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:39.299 [2024-07-24 18:52:24.055371] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:39.299 [2024-07-24 18:52:24.055383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:39.299 [2024-07-24 18:52:24.055393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:39.299 [2024-07-24 18:52:24.056386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:39.299 [2024-07-24 18:52:24.056397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:39.299 [2024-07-24 18:52:24.057388] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:39.299 [2024-07-24 18:52:24.057399] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:39.299 [2024-07-24 18:52:24.057406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:39.299 [2024-07-24 18:52:24.057414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:39.299 [2024-07-24 18:52:24.057522] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:39.299 [2024-07-24 18:52:24.057528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:39.299 [2024-07-24 18:52:24.057535] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:39.299 [2024-07-24 18:52:24.061612] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:39.299 [2024-07-24 18:52:24.062444] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:39.299 [2024-07-24 18:52:24.063448] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:39.299 [2024-07-24 18:52:24.064452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:39.299 [2024-07-24 18:52:24.064585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:39.299 [2024-07-24 18:52:24.065474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:39.299 [2024-07-24 18:52:24.065484] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:39.299 [2024-07-24 18:52:24.065491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065519] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:39.299 [2024-07-24 18:52:24.065528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065547] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.299 [2024-07-24 18:52:24.065555] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.299 [2024-07-24 18:52:24.065560] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.299 [2024-07-24 18:52:24.065576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.299 [2024-07-24 18:52:24.065655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:39.299 [2024-07-24 18:52:24.065668] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:39.299 [2024-07-24 18:52:24.065674] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:39.299 [2024-07-24 18:52:24.065680] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:39.299 [2024-07-24 18:52:24.065686] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:39.299 [2024-07-24 18:52:24.065692] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:39.299 [2024-07-24 18:52:24.065698] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:39.299 [2024-07-24 18:52:24.065704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:39.299 [2024-07-24 18:52:24.065747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:39.299 [2024-07-24 18:52:24.065764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.299 [2024-07-24 18:52:24.065774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.299 [2024-07-24 18:52:24.065785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.299 [2024-07-24 18:52:24.065795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.299 [2024-07-24 18:52:24.065801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:39.299 [2024-07-24 18:52:24.065837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:39.299 [2024-07-24 18:52:24.065845] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:39.299 
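The nvme_pcie_prp_list_append lines in this trace make the PRP rule easy to check by hand: a payload that fits in one 4 KiB page (the 4096-byte identify buffer at 0x2000002fb000 above, "Number of PRP entries: 1") needs only PRP1, while a payload crossing into a second page (the 8192-byte GET LOG PAGE buffer further down) also sets PRP2 to the next page. Worked out in shell arithmetic against the page-aligned addresses from the trace:

    page=4096
    buf=0x2000002fb000; len=4096     # identify payload above: one page -> 1 PRP entry
    printf 'PRP1=0x%x entries=%d\n' $((buf)) $(( (len + page - 1) / page ))
    buf=0x2000002f6000; len=8192     # GET LOG PAGE payload below: two pages -> PRP2 set
    printf 'PRP1=0x%x PRP2=0x%x\n' $((buf)) $((buf + page))
    # -> PRP1=0x2000002f6000 PRP2=0x2000002f7000, matching the trace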
[2024-07-24 18:52:24.065854] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:39.299 [2024-07-24 18:52:24.065898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:39.299 [2024-07-24 18:52:24.065973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:39.299 [2024-07-24 18:52:24.065993] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:39.299 [2024-07-24 18:52:24.065999] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:39.299 [2024-07-24 18:52:24.066004] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.299 [2024-07-24 18:52:24.066011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:39.299 [2024-07-24 18:52:24.066042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:39.299 [2024-07-24 18:52:24.066055] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:39.299 [2024-07-24 18:52:24.066069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066089] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.300 [2024-07-24 18:52:24.066094] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.300 [2024-07-24 18:52:24.066099] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.300 [2024-07-24 18:52:24.066106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066168] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.300 [2024-07-24 18:52:24.066174] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.300 [2024-07-24 18:52:24.066178] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.300 [2024-07-24 18:52:24.066186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066274] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:39.300 [2024-07-24 18:52:24.066280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:39.300 [2024-07-24 18:52:24.066287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:39.300 [2024-07-24 18:52:24.066308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:39.300 [2024-07-24 
18:52:24.066386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066440] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:39.300 [2024-07-24 18:52:24.066446] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:39.300 [2024-07-24 18:52:24.066451] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:39.300 [2024-07-24 18:52:24.066455] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:39.300 [2024-07-24 18:52:24.066460] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:39.300 [2024-07-24 18:52:24.066468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:39.300 [2024-07-24 18:52:24.066477] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:39.300 [2024-07-24 18:52:24.066483] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:39.300 [2024-07-24 18:52:24.066487] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.300 [2024-07-24 18:52:24.066497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066506] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:39.300 [2024-07-24 18:52:24.066512] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.300 [2024-07-24 18:52:24.066516] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.300 [2024-07-24 18:52:24.066523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066533] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:39.300 [2024-07-24 18:52:24.066538] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:39.300 [2024-07-24 18:52:24.066542] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.300 [2024-07-24 18:52:24.066550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:39.300 [2024-07-24 18:52:24.066559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 
18:52:24.066589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:39.300 [2024-07-24 18:52:24.066598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:39.300 ===================================================== 00:15:39.300 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:39.300 ===================================================== 00:15:39.300 Controller Capabilities/Features 00:15:39.300 ================================ 00:15:39.300 Vendor ID: 4e58 00:15:39.300 Subsystem Vendor ID: 4e58 00:15:39.300 Serial Number: SPDK1 00:15:39.300 Model Number: SPDK bdev Controller 00:15:39.300 Firmware Version: 24.09 00:15:39.300 Recommended Arb Burst: 6 00:15:39.300 IEEE OUI Identifier: 8d 6b 50 00:15:39.300 Multi-path I/O 00:15:39.300 May have multiple subsystem ports: Yes 00:15:39.300 May have multiple controllers: Yes 00:15:39.300 Associated with SR-IOV VF: No 00:15:39.300 Max Data Transfer Size: 131072 00:15:39.300 Max Number of Namespaces: 32 00:15:39.300 Max Number of I/O Queues: 127 00:15:39.300 NVMe Specification Version (VS): 1.3 00:15:39.300 NVMe Specification Version (Identify): 1.3 00:15:39.300 Maximum Queue Entries: 256 00:15:39.300 Contiguous Queues Required: Yes 00:15:39.300 Arbitration Mechanisms Supported 00:15:39.300 Weighted Round Robin: Not Supported 00:15:39.300 Vendor Specific: Not Supported 00:15:39.300 Reset Timeout: 15000 ms 00:15:39.300 Doorbell Stride: 4 bytes 00:15:39.300 NVM Subsystem Reset: Not Supported 00:15:39.300 Command Sets Supported 00:15:39.300 NVM Command Set: Supported 00:15:39.300 Boot Partition: Not Supported 00:15:39.301 Memory Page Size Minimum: 4096 bytes 00:15:39.301 Memory Page Size Maximum: 4096 bytes 00:15:39.301 Persistent Memory Region: Not Supported 00:15:39.301 Optional Asynchronous Events Supported 00:15:39.301 Namespace Attribute Notices: Supported 00:15:39.301 Firmware Activation Notices: Not Supported 00:15:39.301 ANA Change Notices: Not Supported 00:15:39.301 PLE Aggregate Log Change Notices: Not Supported 00:15:39.301 LBA Status Info Alert Notices: Not Supported 00:15:39.301 EGE Aggregate Log Change Notices: Not Supported 00:15:39.301 Normal NVM Subsystem Shutdown event: Not Supported 00:15:39.301 Zone Descriptor Change Notices: Not Supported 00:15:39.301 Discovery Log Change Notices: Not Supported 00:15:39.301 Controller Attributes 00:15:39.301 128-bit Host Identifier: Supported 00:15:39.301 Non-Operational Permissive Mode: Not Supported 00:15:39.301 NVM Sets: Not Supported 00:15:39.301 Read Recovery Levels: Not Supported 00:15:39.301 Endurance Groups: Not Supported 00:15:39.301 Predictable Latency Mode: Not Supported 00:15:39.301 Traffic Based Keep ALive: Not Supported 00:15:39.301 Namespace Granularity: Not Supported 00:15:39.301 SQ Associations: Not Supported 00:15:39.301 UUID List: Not Supported 00:15:39.301 Multi-Domain Subsystem: Not Supported 00:15:39.301 Fixed Capacity Management: Not Supported 00:15:39.301 Variable Capacity Management: Not Supported 00:15:39.301 Delete Endurance Group: Not Supported 00:15:39.301 Delete NVM Set: Not Supported 00:15:39.301 Extended LBA Formats Supported: Not Supported 00:15:39.301 Flexible Data Placement Supported: Not Supported 00:15:39.301 00:15:39.301 Controller Memory Buffer Support 00:15:39.301 ================================ 00:15:39.301 Supported: No 00:15:39.301 00:15:39.301 Persistent 
Memory Region Support 00:15:39.301 ================================ 00:15:39.301 Supported: No 00:15:39.301 00:15:39.301 Admin Command Set Attributes 00:15:39.301 ============================ 00:15:39.301 Security Send/Receive: Not Supported 00:15:39.301 Format NVM: Not Supported 00:15:39.301 Firmware Activate/Download: Not Supported 00:15:39.301 Namespace Management: Not Supported 00:15:39.301 Device Self-Test: Not Supported 00:15:39.301 Directives: Not Supported 00:15:39.301 NVMe-MI: Not Supported 00:15:39.301 Virtualization Management: Not Supported 00:15:39.301 Doorbell Buffer Config: Not Supported 00:15:39.301 Get LBA Status Capability: Not Supported 00:15:39.301 Command & Feature Lockdown Capability: Not Supported 00:15:39.301 Abort Command Limit: 4 00:15:39.301 Async Event Request Limit: 4 00:15:39.301 Number of Firmware Slots: N/A 00:15:39.301 Firmware Slot 1 Read-Only: N/A 00:15:39.301 Firmware Activation Without Reset: N/A 00:15:39.301 Multiple Update Detection Support: N/A 00:15:39.301 Firmware Update Granularity: No Information Provided 00:15:39.301 Per-Namespace SMART Log: No 00:15:39.301 Asymmetric Namespace Access Log Page: Not Supported 00:15:39.301 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:39.301 Command Effects Log Page: Supported 00:15:39.301 Get Log Page Extended Data: Supported 00:15:39.301 Telemetry Log Pages: Not Supported 00:15:39.301 Persistent Event Log Pages: Not Supported 00:15:39.301 Supported Log Pages Log Page: May Support 00:15:39.301 Commands Supported & Effects Log Page: Not Supported 00:15:39.301 Feature Identifiers & Effects Log Page:May Support 00:15:39.301 NVMe-MI Commands & Effects Log Page: May Support 00:15:39.301 Data Area 4 for Telemetry Log: Not Supported 00:15:39.301 Error Log Page Entries Supported: 128 00:15:39.301 Keep Alive: Supported 00:15:39.301 Keep Alive Granularity: 10000 ms 00:15:39.301 00:15:39.301 NVM Command Set Attributes 00:15:39.301 ========================== 00:15:39.301 Submission Queue Entry Size 00:15:39.301 Max: 64 00:15:39.301 Min: 64 00:15:39.301 Completion Queue Entry Size 00:15:39.301 Max: 16 00:15:39.301 Min: 16 00:15:39.301 Number of Namespaces: 32 00:15:39.301 Compare Command: Supported 00:15:39.301 Write Uncorrectable Command: Not Supported 00:15:39.301 Dataset Management Command: Supported 00:15:39.301 Write Zeroes Command: Supported 00:15:39.301 Set Features Save Field: Not Supported 00:15:39.301 Reservations: Not Supported 00:15:39.301 Timestamp: Not Supported 00:15:39.301 Copy: Supported 00:15:39.301 Volatile Write Cache: Present 00:15:39.301 Atomic Write Unit (Normal): 1 00:15:39.301 Atomic Write Unit (PFail): 1 00:15:39.301 Atomic Compare & Write Unit: 1 00:15:39.301 Fused Compare & Write: Supported 00:15:39.301 Scatter-Gather List 00:15:39.301 SGL Command Set: Supported (Dword aligned) 00:15:39.301 SGL Keyed: Not Supported 00:15:39.301 SGL Bit Bucket Descriptor: Not Supported 00:15:39.301 SGL Metadata Pointer: Not Supported 00:15:39.301 Oversized SGL: Not Supported 00:15:39.301 SGL Metadata Address: Not Supported 00:15:39.301 SGL Offset: Not Supported 00:15:39.301 Transport SGL Data Block: Not Supported 00:15:39.301 Replay Protected Memory Block: Not Supported 00:15:39.301 00:15:39.301 Firmware Slot Information 00:15:39.301 ========================= 00:15:39.301 Active slot: 1 00:15:39.301 Slot 1 Firmware Revision: 24.09 00:15:39.301 00:15:39.301 00:15:39.301 Commands Supported and Effects 00:15:39.301 ============================== 00:15:39.301 Admin Commands 00:15:39.301 -------------- 00:15:39.301 Get 
Log Page (02h): Supported 00:15:39.301 Identify (06h): Supported 00:15:39.301 Abort (08h): Supported 00:15:39.301 Set Features (09h): Supported 00:15:39.301 Get Features (0Ah): Supported 00:15:39.301 Asynchronous Event Request (0Ch): Supported 00:15:39.301 Keep Alive (18h): Supported 00:15:39.301 I/O Commands 00:15:39.301 ------------ 00:15:39.301 Flush (00h): Supported LBA-Change 00:15:39.301 Write (01h): Supported LBA-Change 00:15:39.301 Read (02h): Supported 00:15:39.301 Compare (05h): Supported 00:15:39.301 Write Zeroes (08h): Supported LBA-Change 00:15:39.301 Dataset Management (09h): Supported LBA-Change 00:15:39.301 Copy (19h): Supported LBA-Change 00:15:39.301 00:15:39.301 Error Log 00:15:39.301 ========= 00:15:39.301 00:15:39.301 Arbitration 00:15:39.301 =========== 00:15:39.301 Arbitration Burst: 1 00:15:39.301 00:15:39.301 Power Management 00:15:39.301 ================ 00:15:39.301 Number of Power States: 1 00:15:39.301 Current Power State: Power State #0 00:15:39.301 Power State #0: 00:15:39.301 Max Power: 0.00 W 00:15:39.301 Non-Operational State: Operational 00:15:39.301 Entry Latency: Not Reported 00:15:39.301 Exit Latency: Not Reported 00:15:39.301 Relative Read Throughput: 0 00:15:39.301 Relative Read Latency: 0 00:15:39.301 Relative Write Throughput: 0 00:15:39.301 Relative Write Latency: 0 00:15:39.301 Idle Power: Not Reported 00:15:39.301 Active Power: Not Reported 00:15:39.301 Non-Operational Permissive Mode: Not Supported 00:15:39.301 00:15:39.301 Health Information 00:15:39.301 ================== 00:15:39.301 Critical Warnings: 00:15:39.301 Available Spare Space: OK 00:15:39.301 Temperature: OK 00:15:39.301 Device Reliability: OK 00:15:39.301 Read Only: No 00:15:39.301 Volatile Memory Backup: OK 00:15:39.301 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:39.301 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:39.301 Available Spare: 0% 00:15:39.301 Available Spare Threshold: 0%
[2024-07-24 18:52:24.066728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:39.301 [2024-07-24 18:52:24.066750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:39.301 [2024-07-24 18:52:24.066784] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:39.301 [2024-07-24 18:52:24.066796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.301 [2024-07-24 18:52:24.066805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.301 [2024-07-24 18:52:24.066812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.301 [2024-07-24 18:52:24.066820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.301 [2024-07-24 18:52:24.067487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:39.302 [2024-07-24 18:52:24.067501] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:39.302 [2024-07-24 18:52:24.068492] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-07-24 18:52:24.068570] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us [2024-07-24 18:52:24.068580] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms [2024-07-24 18:52:24.069497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-07-24 18:52:24.069512] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds [2024-07-24 18:52:24.069574] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-07-24 18:52:24.071548] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:39.302 Life Percentage Used: 0% 00:15:39.302 Data Units Read: 0 00:15:39.302 Data Units Written: 0 00:15:39.302 Host Read Commands: 0 00:15:39.302 Host Write Commands: 0 00:15:39.302 Controller Busy Time: 0 minutes 00:15:39.302 Power Cycles: 0 00:15:39.302 Power On Hours: 0 hours 00:15:39.302 Unsafe Shutdowns: 0 00:15:39.302 Unrecoverable Media Errors: 0 00:15:39.302 Lifetime Error Log Entries: 0 00:15:39.302 Warning Temperature Time: 0 minutes 00:15:39.302 Critical Temperature Time: 0 minutes 00:15:39.302 00:15:39.302 Number of Queues 00:15:39.302 ================ 00:15:39.302 Number of I/O Submission Queues: 127 00:15:39.302 Number of I/O Completion Queues: 127 00:15:39.302 00:15:39.302 Active Namespaces 00:15:39.302 ================= 00:15:39.302 Namespace ID:1 00:15:39.302 Error Recovery Timeout: Unlimited 00:15:39.302 Command Set Identifier: NVM (00h) 00:15:39.302 Deallocate: Supported 00:15:39.302 Deallocated/Unwritten Error: Not Supported 00:15:39.302 Deallocated Read Value: Unknown 00:15:39.302 Deallocate in Write Zeroes: Not Supported 00:15:39.302 Deallocated Guard Field: 0xFFFF 00:15:39.302 Flush: Supported 00:15:39.302 Reservation: Supported 00:15:39.302 Namespace Sharing Capabilities: Multiple Controllers 00:15:39.302 Size (in LBAs): 131072 (0GiB) 00:15:39.302 Capacity (in LBAs): 131072 (0GiB) 00:15:39.302 Utilization (in LBAs): 131072 (0GiB) 00:15:39.302 NGUID: DE35D71EE14A409798FFF4475E19FB15 00:15:39.302 UUID: de35d71e-e14a-4097-98ff-f4475e19fb15 00:15:39.302 Thin Provisioning: Not Supported 00:15:39.302 Per-NS Atomic Units: Yes 00:15:39.302 Atomic Boundary Size (Normal): 0 00:15:39.302 Atomic Boundary Size (PFail): 0 00:15:39.302 Atomic Boundary Offset: 0 00:15:39.302 Maximum Single Source Range Length: 65535 00:15:39.302 Maximum Copy Length: 65535 00:15:39.302 Maximum Source Range Count: 1 00:15:39.302 NGUID/EUI64 Never Reused: No 00:15:39.302 Namespace Write Protected: No 00:15:39.302 Number of LBA Formats: 1 00:15:39.302 Current LBA Format: LBA Format #00 00:15:39.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:39.302 00:15:39.302 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:39.302 EAL: No free 2048 kB hugepages reported
on node 1 00:15:39.560 [2024-07-24 18:52:24.327855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:44.834 Initializing NVMe Controllers 00:15:44.834 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:44.834 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:44.834 Initialization complete. Launching workers. 00:15:44.834 ======================================================== 00:15:44.834 Latency(us) 00:15:44.834 Device Information : IOPS MiB/s Average min max 00:15:44.834 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 18647.20 72.84 6866.31 2703.32 13413.67 00:15:44.834 ======================================================== 00:15:44.834 Total : 18647.20 72.84 6866.31 2703.32 13413.67 00:15:44.834 00:15:44.834 [2024-07-24 18:52:29.353107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:44.834 18:52:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:44.834 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.834 [2024-07-24 18:52:29.638986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.141 Initializing NVMe Controllers 00:15:50.141 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.141 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:50.141 Initialization complete. Launching workers. 
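For reference, the write-side figures below come from the same spdk_nvme_perf tool as the read pass above: queue depth 128 (-q), 4096-byte I/O (-o), a 5 second run (-t) pinned to core 1 (-c 0x2), with the vfio-user endpoint addressed through the -r transport string instead of a PCI address. A minimal sketch of the invocation, paths relative to the SPDK checkout this job uses:

  build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2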
00:15:50.141 ======================================================== 00:15:50.141 Latency(us) 00:15:50.141 Device Information : IOPS MiB/s Average min max 00:15:50.141 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15575.70 60.84 8217.60 7318.78 15102.67 00:15:50.141 ======================================================== 00:15:50.141 Total : 15575.70 60.84 8217.60 7318.78 15102.67 00:15:50.141 00:15:50.141 [2024-07-24 18:52:34.681067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.141 18:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:50.141 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.141 [2024-07-24 18:52:34.982053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.422 [2024-07-24 18:52:40.057213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.422 Initializing NVMe Controllers 00:15:55.422 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:55.422 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:55.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:55.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:55.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:55.422 Initialization complete. Launching workers. 00:15:55.422 Starting thread on core 2 00:15:55.422 Starting thread on core 3 00:15:55.422 Starting thread on core 1 00:15:55.422 18:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:55.422 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.422 [2024-07-24 18:52:40.406480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.705 [2024-07-24 18:52:43.481112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.705 Initializing NVMe Controllers 00:15:58.705 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.705 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.705 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:58.705 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:58.705 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:58.705 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:58.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:58.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:58.705 Initialization complete. Launching workers. 
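The per-core IO/s table that follows is produced by the arbitration example, which echoes its effective configuration just above (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf ...) and then starts one worker per core in mask 0xf, each submitting to an urgent-priority queue. A sketch of the invocation as issued here (relative path assumed):

  build/examples/arbitration -t 3 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -d 256 -g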
00:15:58.705 Starting thread on core 1 with urgent priority queue 00:15:58.705 Starting thread on core 2 with urgent priority queue 00:15:58.705 Starting thread on core 3 with urgent priority queue 00:15:58.705 Starting thread on core 0 with urgent priority queue 00:15:58.705 SPDK bdev Controller (SPDK1 ) core 0: 4366.33 IO/s 22.90 secs/100000 ios 00:15:58.705 SPDK bdev Controller (SPDK1 ) core 1: 4294.00 IO/s 23.29 secs/100000 ios 00:15:58.705 SPDK bdev Controller (SPDK1 ) core 2: 6741.00 IO/s 14.83 secs/100000 ios 00:15:58.705 SPDK bdev Controller (SPDK1 ) core 3: 5947.33 IO/s 16.81 secs/100000 ios 00:15:58.705 ======================================================== 00:15:58.705 00:15:58.705 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:58.705 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.962 [2024-07-24 18:52:43.802695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.962 Initializing NVMe Controllers 00:15:58.962 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.962 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.962 Namespace ID: 1 size: 0GB 00:15:58.962 Initialization complete. 00:15:58.962 INFO: using host memory buffer for IO 00:15:58.962 Hello world! 00:15:58.962 [2024-07-24 18:52:43.837151] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.962 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:58.962 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.219 [2024-07-24 18:52:44.165227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:00.596 Initializing NVMe Controllers 00:16:00.596 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.596 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.596 Initialization complete. Launching workers. 
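The two summaries that follow come from the overhead tool (test/nvme/overhead). As its headers read, it reports per-call submit and complete times as avg/min/max in nanoseconds, then cumulative-count histograms bucketed in microseconds; the worst-case calls (about 4 ms) land in the 3991-4021 us top bucket together with the closing 100.0000% rows. Invocation used for this pass, where -H appears to be what enables the histogram output:

  test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'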
00:16:00.596 submit (in ns) avg, min, max = 10012.8, 4530.0, 4003219.1 00:16:00.596 complete (in ns) avg, min, max = 45169.7, 2716.4, 4009410.9 00:16:00.596 00:16:00.596 Submit histogram 00:16:00.596 ================ 00:16:00.596 Range in us Cumulative Count 00:16:00.596 4.509 - 4.538: 0.0145% ( 1) 00:16:00.596 4.538 - 4.567: 0.5520% ( 37) 00:16:00.596 4.567 - 4.596: 3.1232% ( 177) 00:16:00.596 4.596 - 4.625: 6.9436% ( 263) 00:16:00.596 4.625 - 4.655: 10.5752% ( 250) 00:16:00.596 4.655 - 4.684: 19.8286% ( 637) 00:16:00.596 4.684 - 4.713: 34.3986% ( 1003) 00:16:00.596 4.713 - 4.742: 46.9494% ( 864) 00:16:00.596 4.742 - 4.771: 56.9872% ( 691) 00:16:00.596 4.771 - 4.800: 67.6932% ( 737) 00:16:00.596 4.800 - 4.829: 77.4840% ( 674) 00:16:00.596 4.829 - 4.858: 84.0354% ( 451) 00:16:00.596 4.858 - 4.887: 86.2725% ( 154) 00:16:00.596 4.887 - 4.916: 87.4782% ( 83) 00:16:00.596 4.916 - 4.945: 88.5096% ( 71) 00:16:00.596 4.945 - 4.975: 90.4416% ( 133) 00:16:00.596 4.975 - 5.004: 92.4608% ( 139) 00:16:00.596 5.004 - 5.033: 94.3637% ( 131) 00:16:00.596 5.033 - 5.062: 96.0488% ( 116) 00:16:00.596 5.062 - 5.091: 97.4724% ( 98) 00:16:00.596 5.091 - 5.120: 98.1987% ( 50) 00:16:00.596 5.120 - 5.149: 98.8960% ( 48) 00:16:00.596 5.149 - 5.178: 99.2156% ( 22) 00:16:00.597 5.178 - 5.207: 99.3318% ( 8) 00:16:00.597 5.207 - 5.236: 99.4335% ( 7) 00:16:00.597 5.236 - 5.265: 99.4916% ( 4) 00:16:00.597 5.265 - 5.295: 99.5061% ( 1) 00:16:00.597 5.295 - 5.324: 99.5206% ( 1) 00:16:00.597 5.324 - 5.353: 99.5352% ( 1) 00:16:00.597 5.353 - 5.382: 99.5497% ( 1) 00:16:00.597 8.262 - 8.320: 99.5642% ( 1) 00:16:00.597 8.320 - 8.378: 99.5787% ( 1) 00:16:00.597 8.378 - 8.436: 99.5933% ( 1) 00:16:00.597 8.669 - 8.727: 99.6078% ( 1) 00:16:00.597 8.727 - 8.785: 99.6223% ( 1) 00:16:00.597 8.785 - 8.844: 99.6804% ( 4) 00:16:00.597 9.018 - 9.076: 99.6949% ( 1) 00:16:00.597 9.135 - 9.193: 99.7095% ( 1) 00:16:00.597 9.193 - 9.251: 99.7240% ( 1) 00:16:00.597 9.367 - 9.425: 99.7385% ( 1) 00:16:00.597 9.425 - 9.484: 99.7531% ( 1) 00:16:00.597 10.065 - 10.124: 99.7676% ( 1) 00:16:00.597 10.124 - 10.182: 99.7821% ( 1) 00:16:00.597 10.240 - 10.298: 99.7966% ( 1) 00:16:00.597 10.298 - 10.356: 99.8112% ( 1) 00:16:00.597 10.531 - 10.589: 99.8257% ( 1) 00:16:00.597 10.996 - 11.055: 99.8402% ( 1) 00:16:00.597 11.578 - 11.636: 99.8547% ( 1) 00:16:00.597 11.636 - 11.695: 99.8693% ( 1) 00:16:00.597 3991.738 - 4021.527: 100.0000% ( 9) 00:16:00.597 00:16:00.597 Complete histogram 00:16:00.597 ================== 00:16:00.597 Range in us Cumulative Count 00:16:00.597 2.705 - 2.720: 0.0291% ( 2) 00:16:00.597 2.720 - 2.735: 0.9878% ( 66) 00:16:00.597 2.735 - 2.749: 12.6525% ( 803) 00:16:00.597 2.749 - 2.764: 31.2028% ( 1277) 00:16:00.597 2.764 - 2.778: 40.2092% ( 620) 00:16:00.597 2.778 - 2.793: 43.2162% ( 207) 00:16:00.597 2.793 - 2.807: 53.2394% ( 690) 00:16:00.597 2.807 - 2.822: 75.7263% ( 1548) 00:16:00.597 2.822 - 2.836: 88.9454% ( 910) 00:16:00.597 2.836 - 2.851: 92.8966% ( 272) 00:16:00.597 2.851 - 2.865: 95.1046% ( 152) 00:16:00.597 2.865 - 2.880: 96.4701% ( 94) 00:16:00.597 2.880 - 2.895: 97.1528% ( 47) 00:16:00.597 2.895 - 2.909: 97.6467% ( 34) 00:16:00.597 2.909 - 2.924: 98.2132% ( 39) 00:16:00.597 2.924 - 2.938: 98.4457% ( 16) 00:16:00.597 2.938 - 2.953: 98.5619% ( 8) 00:16:00.597 2.953 - 2.967: 98.6345% ( 5) 00:16:00.597 2.967 - 2.982: 98.6490% ( 1) 00:16:00.597 3.535 - 3.549: 98.6636% ( 1) 00:16:00.597 5.702 - 5.731: 98.6781% ( 1) 00:16:00.597 5.876 - 5.905: 98.6926% ( 1) 00:16:00.597 6.138 - 6.167: 98.7071% ( 1) 00:16:00.597 6.196 - 6.225: 
98.7217% ( 1) 00:16:00.597 6.778 - 6.807: 98.7362% ( 1) 00:16:00.597 7.040 - 7.069: 98.7507% ( 1) 00:16:00.597 7.389 - 7.418: 98.7653% ( 1) 00:16:00.597 [2024-07-24 18:52:45.190533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:00.597 7.505 - 7.564: 98.7798% ( 1) 00:16:00.597 7.680 - 7.738: 98.7943% ( 1) 00:16:00.597 7.738 - 7.796: 98.8088% ( 1) 00:16:00.597 7.855 - 7.913: 98.8379% ( 2) 00:16:00.597 8.029 - 8.087: 98.8524% ( 1) 00:16:00.597 8.145 - 8.204: 98.8669% ( 1) 00:16:00.597 8.378 - 8.436: 98.8960% ( 2) 00:16:00.597 9.018 - 9.076: 98.9105% ( 1) 00:16:00.597 9.251 - 9.309: 98.9250% ( 1) 00:16:00.597 10.007 - 10.065: 98.9396% ( 1) 00:16:00.597 3842.793 - 3872.582: 98.9541% ( 1) 00:16:00.597 3991.738 - 4021.527: 100.0000% ( 72) 00:16:00.597 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.597 [ 00:16:00.597 { 00:16:00.597 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.597 "subtype": "Discovery", 00:16:00.597 "listen_addresses": [], 00:16:00.597 "allow_any_host": true, 00:16:00.597 "hosts": [] 00:16:00.597 }, 00:16:00.597 { 00:16:00.597 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.597 "subtype": "NVMe", 00:16:00.597 "listen_addresses": [ 00:16:00.597 { 00:16:00.597 "trtype": "VFIOUSER", 00:16:00.597 "adrfam": "IPv4", 00:16:00.597 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.597 "trsvcid": "0" 00:16:00.597 } 00:16:00.597 ], 00:16:00.597 "allow_any_host": true, 00:16:00.597 "hosts": [], 00:16:00.597 "serial_number": "SPDK1", 00:16:00.597 "model_number": "SPDK bdev Controller", 00:16:00.597 "max_namespaces": 32, 00:16:00.597 "min_cntlid": 1, 00:16:00.597 "max_cntlid": 65519, 00:16:00.597 "namespaces": [ 00:16:00.597 { 00:16:00.597 "nsid": 1, 00:16:00.597 "bdev_name": "Malloc1", 00:16:00.597 "name": "Malloc1", 00:16:00.597 "nguid": "DE35D71EE14A409798FFF4475E19FB15", 00:16:00.597 "uuid": "de35d71e-e14a-4097-98ff-f4475e19fb15" 00:16:00.597 } 00:16:00.597 ] 00:16:00.597 }, 00:16:00.597 { 00:16:00.597 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.597 "subtype": "NVMe", 00:16:00.597 "listen_addresses": [ 00:16:00.597 { 00:16:00.597 "trtype": "VFIOUSER", 00:16:00.597 "adrfam": "IPv4", 00:16:00.597 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.597 "trsvcid": "0" 00:16:00.597 } 00:16:00.597 ], 00:16:00.597 "allow_any_host": true, 00:16:00.597 "hosts": [], 00:16:00.597 "serial_number": "SPDK2", 00:16:00.597 "model_number": "SPDK bdev Controller", 00:16:00.597 "max_namespaces": 32, 00:16:00.597 "min_cntlid": 1, 00:16:00.597 "max_cntlid": 65519, 00:16:00.597 "namespaces": [ 00:16:00.597 { 00:16:00.597 "nsid": 1, 00:16:00.597 "bdev_name": "Malloc2", 00:16:00.597 "name": "Malloc2", 00:16:00.597 "nguid": "C8A9C988110246E38AF0266F1FE3047C", 00:16:00.597 "uuid":
"c8a9c988-1102-46e3-8af0-266f1fe3047c" 00:16:00.597 } 00:16:00.597 ] 00:16:00.597 } 00:16:00.597 ] 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2479808 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:00.597 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:00.597 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.856 [2024-07-24 18:52:45.687347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:00.856 Malloc3 00:16:00.856 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:01.113 [2024-07-24 18:52:46.026497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.113 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:01.113 Asynchronous Event Request test 00:16:01.113 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.113 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.113 Registering asynchronous event callbacks... 00:16:01.113 Starting namespace attribute notice tests for all controllers... 00:16:01.113 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:01.113 aer_cb - Changed Namespace 00:16:01.113 Cleaning up... 
00:16:01.372 [ 00:16:01.372 { 00:16:01.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:01.372 "subtype": "Discovery", 00:16:01.372 "listen_addresses": [], 00:16:01.372 "allow_any_host": true, 00:16:01.372 "hosts": [] 00:16:01.372 }, 00:16:01.372 { 00:16:01.372 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:01.372 "subtype": "NVMe", 00:16:01.372 "listen_addresses": [ 00:16:01.372 { 00:16:01.372 "trtype": "VFIOUSER", 00:16:01.372 "adrfam": "IPv4", 00:16:01.372 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:01.372 "trsvcid": "0" 00:16:01.372 } 00:16:01.372 ], 00:16:01.372 "allow_any_host": true, 00:16:01.372 "hosts": [], 00:16:01.372 "serial_number": "SPDK1", 00:16:01.372 "model_number": "SPDK bdev Controller", 00:16:01.372 "max_namespaces": 32, 00:16:01.372 "min_cntlid": 1, 00:16:01.372 "max_cntlid": 65519, 00:16:01.372 "namespaces": [ 00:16:01.372 { 00:16:01.372 "nsid": 1, 00:16:01.372 "bdev_name": "Malloc1", 00:16:01.372 "name": "Malloc1", 00:16:01.372 "nguid": "DE35D71EE14A409798FFF4475E19FB15", 00:16:01.372 "uuid": "de35d71e-e14a-4097-98ff-f4475e19fb15" 00:16:01.372 }, 00:16:01.372 { 00:16:01.372 "nsid": 2, 00:16:01.372 "bdev_name": "Malloc3", 00:16:01.372 "name": "Malloc3", 00:16:01.372 "nguid": "AF6792E52CD74C60B3279E9C6DF94759", 00:16:01.372 "uuid": "af6792e5-2cd7-4c60-b327-9e9c6df94759" 00:16:01.372 } 00:16:01.372 ] 00:16:01.372 }, 00:16:01.372 { 00:16:01.372 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:01.372 "subtype": "NVMe", 00:16:01.372 "listen_addresses": [ 00:16:01.372 { 00:16:01.372 "trtype": "VFIOUSER", 00:16:01.372 "adrfam": "IPv4", 00:16:01.372 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:01.372 "trsvcid": "0" 00:16:01.372 } 00:16:01.372 ], 00:16:01.372 "allow_any_host": true, 00:16:01.372 "hosts": [], 00:16:01.372 "serial_number": "SPDK2", 00:16:01.372 "model_number": "SPDK bdev Controller", 00:16:01.372 "max_namespaces": 32, 00:16:01.372 "min_cntlid": 1, 00:16:01.372 "max_cntlid": 65519, 00:16:01.372 "namespaces": [ 00:16:01.372 { 00:16:01.372 "nsid": 1, 00:16:01.372 "bdev_name": "Malloc2", 00:16:01.372 "name": "Malloc2", 00:16:01.372 "nguid": "C8A9C988110246E38AF0266F1FE3047C", 00:16:01.372 "uuid": "c8a9c988-1102-46e3-8af0-266f1fe3047c" 00:16:01.372 } 00:16:01.372 ] 00:16:01.372 } 00:16:01.372 ] 00:16:01.372 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2479808 00:16:01.372 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:01.372 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:01.372 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:01.372 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:01.372 [2024-07-24 18:52:46.330599] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:16:01.372 [2024-07-24 18:52:46.330748] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480066 ] 00:16:01.372 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.372 [2024-07-24 18:52:46.368973] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:01.372 [2024-07-24 18:52:46.371278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.372 [2024-07-24 18:52:46.371304] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0b40f2a000 00:16:01.372 [2024-07-24 18:52:46.372290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.373292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.374305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.375317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.376323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.377329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.378342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.379350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.372 [2024-07-24 18:52:46.380375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.372 [2024-07-24 18:52:46.380388] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0b40f1f000 00:16:01.632 [2024-07-24 18:52:46.381799] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.632 [2024-07-24 18:52:46.397559] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:01.632 [2024-07-24 18:52:46.397596] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:01.632 [2024-07-24 18:52:46.402711] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.632 [2024-07-24 18:52:46.402763] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:01.632 [2024-07-24 18:52:46.402866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:16:01.632 [2024-07-24 18:52:46.402885] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:01.632 [2024-07-24 18:52:46.402893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:01.632 [2024-07-24 18:52:46.403714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:01.633 [2024-07-24 18:52:46.403731] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:01.633 [2024-07-24 18:52:46.403740] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:01.633 [2024-07-24 18:52:46.404734] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.633 [2024-07-24 18:52:46.404747] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:01.633 [2024-07-24 18:52:46.404757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:01.633 [2024-07-24 18:52:46.405744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:01.633 [2024-07-24 18:52:46.405757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:01.633 [2024-07-24 18:52:46.406758] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:01.633 [2024-07-24 18:52:46.406770] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:01.633 [2024-07-24 18:52:46.406777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:01.633 [2024-07-24 18:52:46.406785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:01.633 [2024-07-24 18:52:46.406893] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:01.633 [2024-07-24 18:52:46.406899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:01.633 [2024-07-24 18:52:46.406906] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:01.633 [2024-07-24 18:52:46.407770] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:01.633 [2024-07-24 18:52:46.408781] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:01.633 [2024-07-24 18:52:46.409784] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.633 [2024-07-24 18:52:46.410797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.633 [2024-07-24 18:52:46.410850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:01.633 [2024-07-24 18:52:46.411809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:01.633 [2024-07-24 18:52:46.411823] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:01.633 [2024-07-24 18:52:46.411829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.411855] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:01.633 [2024-07-24 18:52:46.411864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.411880] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.633 [2024-07-24 18:52:46.411887] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.633 [2024-07-24 18:52:46.411892] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.633 [2024-07-24 18:52:46.411906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.420614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.420630] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:01.633 [2024-07-24 18:52:46.420636] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:01.633 [2024-07-24 18:52:46.420642] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:01.633 [2024-07-24 18:52:46.420648] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:01.633 [2024-07-24 18:52:46.420654] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:01.633 [2024-07-24 18:52:46.420660] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:01.633 [2024-07-24 18:52:46.420666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.420675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.420692] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.428609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.428630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.633 [2024-07-24 18:52:46.428641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.633 [2024-07-24 18:52:46.428651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.633 [2024-07-24 18:52:46.428661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.633 [2024-07-24 18:52:46.428667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.428680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.428693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.436611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.436622] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:01.633 [2024-07-24 18:52:46.436629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.436640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.436648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.436660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.444613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.444692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.444703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.444713] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:01.633 [2024-07-24 18:52:46.444719] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:01.633 [2024-07-24 
18:52:46.444724] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.633 [2024-07-24 18:52:46.444732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.452609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.452624] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:01.633 [2024-07-24 18:52:46.452639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.452649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.452658] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.633 [2024-07-24 18:52:46.452664] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.633 [2024-07-24 18:52:46.452669] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.633 [2024-07-24 18:52:46.452676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.460611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.460632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.460645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.460655] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.633 [2024-07-24 18:52:46.460661] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.633 [2024-07-24 18:52:46.460666] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.633 [2024-07-24 18:52:46.460674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.633 [2024-07-24 18:52:46.468611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:01.633 [2024-07-24 18:52:46.468624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.468632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:01.633 [2024-07-24 18:52:46.468642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:01.633 [2024-07-24 
18:52:46.468652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:01.634 [2024-07-24 18:52:46.468659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:01.634 [2024-07-24 18:52:46.468665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:01.634 [2024-07-24 18:52:46.468671] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:01.634 [2024-07-24 18:52:46.468677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:01.634 [2024-07-24 18:52:46.468684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:01.634 [2024-07-24 18:52:46.468705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.476611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.476629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.484612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.484630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.492612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.492630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.500609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.500632] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:01.634 [2024-07-24 18:52:46.500639] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:01.634 [2024-07-24 18:52:46.500643] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:01.634 [2024-07-24 18:52:46.500650] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:01.634 [2024-07-24 18:52:46.500655] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:01.634 [2024-07-24 18:52:46.500663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:01.634 [2024-07-24 18:52:46.500673] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:01.634 [2024-07-24 18:52:46.500678] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:16:01.634 [2024-07-24 18:52:46.500683] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.634 [2024-07-24 18:52:46.500690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.500699] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:01.634 [2024-07-24 18:52:46.500705] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.634 [2024-07-24 18:52:46.500709] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.634 [2024-07-24 18:52:46.500716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.500726] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:01.634 [2024-07-24 18:52:46.500731] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:01.634 [2024-07-24 18:52:46.500736] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:01.634 [2024-07-24 18:52:46.500743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:01.634 [2024-07-24 18:52:46.508612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.508631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:01.634 [2024-07-24 18:52:46.508654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:01.634 ===================================================== 00:16:01.634 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.634 ===================================================== 00:16:01.634 Controller Capabilities/Features 00:16:01.634 ================================ 00:16:01.634 Vendor ID: 4e58 00:16:01.634 Subsystem Vendor ID: 4e58 00:16:01.634 Serial Number: SPDK2 00:16:01.634 Model Number: SPDK bdev Controller 00:16:01.634 Firmware Version: 24.09 00:16:01.634 Recommended Arb Burst: 6 00:16:01.634 IEEE OUI Identifier: 8d 6b 50 00:16:01.634 Multi-path I/O 00:16:01.634 May have multiple subsystem ports: Yes 00:16:01.634 May have multiple controllers: Yes 00:16:01.634 Associated with SR-IOV VF: No 00:16:01.634 Max Data Transfer Size: 131072 00:16:01.634 Max Number of Namespaces: 32 00:16:01.634 Max Number of I/O Queues: 127 00:16:01.634 NVMe Specification Version (VS): 1.3 00:16:01.634 NVMe Specification Version (Identify): 1.3 00:16:01.634 Maximum Queue Entries: 256 00:16:01.634 Contiguous Queues Required: Yes 00:16:01.634 Arbitration Mechanisms Supported 00:16:01.634 Weighted Round Robin: Not Supported 00:16:01.634 Vendor Specific: Not Supported 00:16:01.634 Reset Timeout: 15000 ms 00:16:01.634 Doorbell Stride: 4 
bytes 00:16:01.634 NVM Subsystem Reset: Not Supported 00:16:01.634 Command Sets Supported 00:16:01.634 NVM Command Set: Supported 00:16:01.634 Boot Partition: Not Supported 00:16:01.634 Memory Page Size Minimum: 4096 bytes 00:16:01.634 Memory Page Size Maximum: 4096 bytes 00:16:01.634 Persistent Memory Region: Not Supported 00:16:01.634 Optional Asynchronous Events Supported 00:16:01.634 Namespace Attribute Notices: Supported 00:16:01.634 Firmware Activation Notices: Not Supported 00:16:01.634 ANA Change Notices: Not Supported 00:16:01.634 PLE Aggregate Log Change Notices: Not Supported 00:16:01.634 LBA Status Info Alert Notices: Not Supported 00:16:01.634 EGE Aggregate Log Change Notices: Not Supported 00:16:01.634 Normal NVM Subsystem Shutdown event: Not Supported 00:16:01.634 Zone Descriptor Change Notices: Not Supported 00:16:01.634 Discovery Log Change Notices: Not Supported 00:16:01.634 Controller Attributes 00:16:01.634 128-bit Host Identifier: Supported 00:16:01.634 Non-Operational Permissive Mode: Not Supported 00:16:01.634 NVM Sets: Not Supported 00:16:01.634 Read Recovery Levels: Not Supported 00:16:01.634 Endurance Groups: Not Supported 00:16:01.634 Predictable Latency Mode: Not Supported 00:16:01.634 Traffic Based Keep ALive: Not Supported 00:16:01.634 Namespace Granularity: Not Supported 00:16:01.634 SQ Associations: Not Supported 00:16:01.634 UUID List: Not Supported 00:16:01.634 Multi-Domain Subsystem: Not Supported 00:16:01.634 Fixed Capacity Management: Not Supported 00:16:01.634 Variable Capacity Management: Not Supported 00:16:01.634 Delete Endurance Group: Not Supported 00:16:01.634 Delete NVM Set: Not Supported 00:16:01.634 Extended LBA Formats Supported: Not Supported 00:16:01.634 Flexible Data Placement Supported: Not Supported 00:16:01.634 00:16:01.634 Controller Memory Buffer Support 00:16:01.634 ================================ 00:16:01.634 Supported: No 00:16:01.634 00:16:01.634 Persistent Memory Region Support 00:16:01.634 ================================ 00:16:01.634 Supported: No 00:16:01.634 00:16:01.634 Admin Command Set Attributes 00:16:01.634 ============================ 00:16:01.634 Security Send/Receive: Not Supported 00:16:01.634 Format NVM: Not Supported 00:16:01.634 Firmware Activate/Download: Not Supported 00:16:01.634 Namespace Management: Not Supported 00:16:01.634 Device Self-Test: Not Supported 00:16:01.634 Directives: Not Supported 00:16:01.634 NVMe-MI: Not Supported 00:16:01.634 Virtualization Management: Not Supported 00:16:01.634 Doorbell Buffer Config: Not Supported 00:16:01.634 Get LBA Status Capability: Not Supported 00:16:01.634 Command & Feature Lockdown Capability: Not Supported 00:16:01.634 Abort Command Limit: 4 00:16:01.634 Async Event Request Limit: 4 00:16:01.634 Number of Firmware Slots: N/A 00:16:01.634 Firmware Slot 1 Read-Only: N/A 00:16:01.634 Firmware Activation Without Reset: N/A 00:16:01.634 Multiple Update Detection Support: N/A 00:16:01.634 Firmware Update Granularity: No Information Provided 00:16:01.634 Per-Namespace SMART Log: No 00:16:01.634 Asymmetric Namespace Access Log Page: Not Supported 00:16:01.634 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:01.634 Command Effects Log Page: Supported 00:16:01.634 Get Log Page Extended Data: Supported 00:16:01.634 Telemetry Log Pages: Not Supported 00:16:01.634 Persistent Event Log Pages: Not Supported 00:16:01.634 Supported Log Pages Log Page: May Support 00:16:01.634 Commands Supported & Effects Log Page: Not Supported 00:16:01.634 Feature Identifiers & Effects Log 
Page:May Support 00:16:01.634 NVMe-MI Commands & Effects Log Page: May Support 00:16:01.634 Data Area 4 for Telemetry Log: Not Supported 00:16:01.634 Error Log Page Entries Supported: 128 00:16:01.634 Keep Alive: Supported 00:16:01.634 Keep Alive Granularity: 10000 ms 00:16:01.634 00:16:01.634 NVM Command Set Attributes 00:16:01.634 ========================== 00:16:01.635 Submission Queue Entry Size 00:16:01.635 Max: 64 00:16:01.635 Min: 64 00:16:01.635 Completion Queue Entry Size 00:16:01.635 Max: 16 00:16:01.635 Min: 16 00:16:01.635 Number of Namespaces: 32 00:16:01.635 Compare Command: Supported 00:16:01.635 Write Uncorrectable Command: Not Supported 00:16:01.635 Dataset Management Command: Supported 00:16:01.635 Write Zeroes Command: Supported 00:16:01.635 Set Features Save Field: Not Supported 00:16:01.635 Reservations: Not Supported 00:16:01.635 Timestamp: Not Supported 00:16:01.635 Copy: Supported 00:16:01.635 Volatile Write Cache: Present 00:16:01.635 Atomic Write Unit (Normal): 1 00:16:01.635 Atomic Write Unit (PFail): 1 00:16:01.635 Atomic Compare & Write Unit: 1 00:16:01.635 Fused Compare & Write: Supported 00:16:01.635 Scatter-Gather List 00:16:01.635 SGL Command Set: Supported (Dword aligned) 00:16:01.635 SGL Keyed: Not Supported 00:16:01.635 SGL Bit Bucket Descriptor: Not Supported 00:16:01.635 SGL Metadata Pointer: Not Supported 00:16:01.635 Oversized SGL: Not Supported 00:16:01.635 SGL Metadata Address: Not Supported 00:16:01.635 SGL Offset: Not Supported 00:16:01.635 Transport SGL Data Block: Not Supported 00:16:01.635 Replay Protected Memory Block: Not Supported 00:16:01.635 00:16:01.635 Firmware Slot Information 00:16:01.635 ========================= 00:16:01.635 Active slot: 1 00:16:01.635 Slot 1 Firmware Revision: 24.09 00:16:01.635 00:16:01.635 00:16:01.635 Commands Supported and Effects 00:16:01.635 ============================== 00:16:01.635 Admin Commands 00:16:01.635 -------------- 00:16:01.635 Get Log Page (02h): Supported 00:16:01.635 Identify (06h): Supported 00:16:01.635 Abort (08h): Supported 00:16:01.635 Set Features (09h): Supported 00:16:01.635 Get Features (0Ah): Supported 00:16:01.635 Asynchronous Event Request (0Ch): Supported 00:16:01.635 Keep Alive (18h): Supported 00:16:01.635 I/O Commands 00:16:01.635 ------------ 00:16:01.635 Flush (00h): Supported LBA-Change 00:16:01.635 Write (01h): Supported LBA-Change 00:16:01.635 Read (02h): Supported 00:16:01.635 Compare (05h): Supported 00:16:01.635 Write Zeroes (08h): Supported LBA-Change 00:16:01.635 Dataset Management (09h): Supported LBA-Change 00:16:01.635 Copy (19h): Supported LBA-Change 00:16:01.635 00:16:01.635 Error Log 00:16:01.635 ========= 00:16:01.635 00:16:01.635 Arbitration 00:16:01.635 =========== 00:16:01.635 Arbitration Burst: 1 00:16:01.635 00:16:01.635 Power Management 00:16:01.635 ================ 00:16:01.635 Number of Power States: 1 00:16:01.635 Current Power State: Power State #0 00:16:01.635 Power State #0: 00:16:01.635 Max Power: 0.00 W 00:16:01.635 Non-Operational State: Operational 00:16:01.635 Entry Latency: Not Reported 00:16:01.635 Exit Latency: Not Reported 00:16:01.635 Relative Read Throughput: 0 00:16:01.635 Relative Read Latency: 0 00:16:01.635 Relative Write Throughput: 0 00:16:01.635 Relative Write Latency: 0 00:16:01.635 Idle Power: Not Reported 00:16:01.635 Active Power: Not Reported 00:16:01.635 Non-Operational Permissive Mode: Not Supported 00:16:01.635 00:16:01.635 Health Information 00:16:01.635 ================== 00:16:01.635 Critical Warnings: 00:16:01.635 
Available Spare Space: OK 00:16:01.635 Temperature: OK 00:16:01.635 Device Reliability: OK 00:16:01.635 Read Only: No 00:16:01.635 Volatile Memory Backup: OK 00:16:01.635 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:01.635 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:01.635 Available Spare: 0% 00:16:01.635 [2024-07-24 18:52:46.508780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:01.635 [2024-07-24 18:52:46.516612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:01.635 [2024-07-24 18:52:46.516655] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:01.635 [2024-07-24 18:52:46.516668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.635 [2024-07-24 18:52:46.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.635 [2024-07-24 18:52:46.516684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.635 [2024-07-24 18:52:46.516692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.635 [2024-07-24 18:52:46.516768] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.635 [2024-07-24 18:52:46.516782] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:01.635 [2024-07-24 18:52:46.517763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.635 [2024-07-24 18:52:46.517823] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:01.635 [2024-07-24 18:52:46.517831] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:01.635 [2024-07-24 18:52:46.518771] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:01.635 [2024-07-24 18:52:46.518787] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:01.635 [2024-07-24 18:52:46.518843] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:01.635 [2024-07-24 18:52:46.520306] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.635 Available Spare Threshold: 0% 00:16:01.635 Life Percentage Used: 0% 00:16:01.635 Data Units Read: 0 00:16:01.635 Data Units Written: 0 00:16:01.635 Host Read Commands: 0 00:16:01.635 Host Write Commands: 0 00:16:01.635 Controller Busy Time: 0 minutes 00:16:01.635 Power Cycles: 0 00:16:01.635 Power On Hours: 0 hours 00:16:01.635 Unsafe Shutdowns: 0 00:16:01.635 Unrecoverable Media Errors: 0 00:16:01.635 Lifetime Error Log Entries: 0 00:16:01.635 Warning Temperature Time: 0 minutes 00:16:01.635 Critical Temperature Time: 0 minutes 00:16:01.635
00:16:01.635 Number of Queues 00:16:01.635 ================ 00:16:01.635 Number of I/O Submission Queues: 127 00:16:01.635 Number of I/O Completion Queues: 127 00:16:01.635 00:16:01.635 Active Namespaces 00:16:01.635 ================= 00:16:01.635 Namespace ID:1 00:16:01.635 Error Recovery Timeout: Unlimited 00:16:01.635 Command Set Identifier: NVM (00h) 00:16:01.635 Deallocate: Supported 00:16:01.635 Deallocated/Unwritten Error: Not Supported 00:16:01.635 Deallocated Read Value: Unknown 00:16:01.635 Deallocate in Write Zeroes: Not Supported 00:16:01.635 Deallocated Guard Field: 0xFFFF 00:16:01.635 Flush: Supported 00:16:01.635 Reservation: Supported 00:16:01.635 Namespace Sharing Capabilities: Multiple Controllers 00:16:01.635 Size (in LBAs): 131072 (0GiB) 00:16:01.635 Capacity (in LBAs): 131072 (0GiB) 00:16:01.635 Utilization (in LBAs): 131072 (0GiB) 00:16:01.635 NGUID: C8A9C988110246E38AF0266F1FE3047C 00:16:01.635 UUID: c8a9c988-1102-46e3-8af0-266f1fe3047c 00:16:01.635 Thin Provisioning: Not Supported 00:16:01.635 Per-NS Atomic Units: Yes 00:16:01.635 Atomic Boundary Size (Normal): 0 00:16:01.635 Atomic Boundary Size (PFail): 0 00:16:01.635 Atomic Boundary Offset: 0 00:16:01.635 Maximum Single Source Range Length: 65535 00:16:01.635 Maximum Copy Length: 65535 00:16:01.635 Maximum Source Range Count: 1 00:16:01.635 NGUID/EUI64 Never Reused: No 00:16:01.635 Namespace Write Protected: No 00:16:01.635 Number of LBA Formats: 1 00:16:01.635 Current LBA Format: LBA Format #00 00:16:01.635 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:01.635 00:16:01.635 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:01.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.894 [2024-07-24 18:52:46.780317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.165 Initializing NVMe Controllers 00:16:07.165 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:07.165 Initialization complete. Launching workers. 
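The read figures below are the second pass of the same perf test, now aimed at the second device: the harness loops over devices and reassigns test_traddr and test_subnqn per iteration (the @80-@82 lines logged earlier). A sketch of that driver loop, assuming NUM_DEVICES=2 for this job and that traddr/subnqn are derived from the loop index, as the values set at @81-@82 suggest:

  for i in $(seq 1 $NUM_DEVICES); do
      test_traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      test_subnqn=nqn.2019-07.io.spdk:cnode$i
      # identify, perf read/write, reconnect, arbitration, hello_world, overhead ...
  done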
00:16:07.165 ======================================================== 00:16:07.165 Latency(us) 00:16:07.165 Device Information : IOPS MiB/s Average min max 00:16:07.165 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 18632.14 72.78 6870.64 2701.21 13951.07 00:16:07.165 ======================================================== 00:16:07.165 Total : 18632.14 72.78 6870.64 2701.21 13951.07 00:16:07.165 00:16:07.165 [2024-07-24 18:52:51.884948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.165 18:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:07.165 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.165 [2024-07-24 18:52:52.168235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.452 Initializing NVMe Controllers 00:16:12.452 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.452 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:12.452 Initialization complete. Launching workers. 00:16:12.452 ======================================================== 00:16:12.452 Latency(us) 00:16:12.452 Device Information : IOPS MiB/s Average min max 00:16:12.452 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24210.40 94.57 5288.27 1549.96 7992.33 00:16:12.452 ======================================================== 00:16:12.452 Total : 24210.40 94.57 5288.27 1549.96 7992.33 00:16:12.452 00:16:12.452 [2024-07-24 18:52:57.191345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.452 18:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:12.452 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.711 [2024-07-24 18:52:57.477601] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.056 [2024-07-24 18:53:02.615742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.056 Initializing NVMe Controllers 00:16:18.056 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:18.056 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:18.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:18.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:18.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:18.056 Initialization complete. Launching workers. 
00:16:18.056 Starting thread on core 2 00:16:18.056 Starting thread on core 3 00:16:18.056 Starting thread on core 1 00:16:18.056 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:18.056 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.056 [2024-07-24 18:53:02.975254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.346 [2024-07-24 18:53:06.048954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.346 Initializing NVMe Controllers 00:16:21.346 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.346 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:21.346 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:21.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:21.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:21.346 Initialization complete. Launching workers. 00:16:21.346 Starting thread on core 1 with urgent priority queue 00:16:21.346 Starting thread on core 2 with urgent priority queue 00:16:21.346 Starting thread on core 3 with urgent priority queue 00:16:21.346 Starting thread on core 0 with urgent priority queue 00:16:21.346 SPDK bdev Controller (SPDK2 ) core 0: 6408.00 IO/s 15.61 secs/100000 ios 00:16:21.346 SPDK bdev Controller (SPDK2 ) core 1: 4752.00 IO/s 21.04 secs/100000 ios 00:16:21.346 SPDK bdev Controller (SPDK2 ) core 2: 4941.00 IO/s 20.24 secs/100000 ios 00:16:21.346 SPDK bdev Controller (SPDK2 ) core 3: 5502.00 IO/s 18.18 secs/100000 ios 00:16:21.346 ======================================================== 00:16:21.346 00:16:21.346 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:21.346 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.605 [2024-07-24 18:53:06.382466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.605 Initializing NVMe Controllers 00:16:21.605 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.605 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.605 Namespace ID: 1 size: 0GB 00:16:21.605 Initialization complete. 00:16:21.605 INFO: using host memory buffer for IO 00:16:21.605 Hello world! 
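Each of the example runs above (perf read/write, reconnect, arbitration, hello_world) attaches to a subsystem that the test script assembles over JSON-RPC; the same rpc.py calls appear verbatim later in this log when the target is rebuilt in interrupt mode. A condensed sketch of that setup sequence, with names and paths copied from the log (the consolidation into one place is editorial):

    # Build one vfio-user subsystem: transport, backing bdev, subsystem, namespace, listener.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user2/2
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0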
00:16:21.605 [2024-07-24 18:53:06.392210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.605 18:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:21.605 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.864 [2024-07-24 18:53:06.710815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:22.800 Initializing NVMe Controllers 00:16:22.800 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.800 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.800 Initialization complete. Launching workers. 00:16:22.800 submit (in ns) avg, min, max = 10565.3, 4540.9, 4004059.1 00:16:22.800 complete (in ns) avg, min, max = 34521.5, 2704.5, 4998950.9 00:16:22.800 00:16:22.800 Submit histogram 00:16:22.801 ================ 00:16:22.801 Range in us Cumulative Count 00:16:22.801 4.538 - 4.567: 0.1568% ( 15) 00:16:22.801 4.567 - 4.596: 1.5475% ( 133) 00:16:22.801 4.596 - 4.625: 3.8164% ( 217) 00:16:22.801 4.625 - 4.655: 6.6813% ( 274) 00:16:22.801 4.655 - 4.684: 13.2058% ( 624) 00:16:22.801 4.684 - 4.713: 24.3622% ( 1067) 00:16:22.801 4.713 - 4.742: 35.3513% ( 1051) 00:16:22.801 4.742 - 4.771: 48.5153% ( 1259) 00:16:22.801 4.771 - 4.800: 60.0063% ( 1099) 00:16:22.801 4.800 - 4.829: 70.0125% ( 957) 00:16:22.801 4.829 - 4.858: 77.9067% ( 755) 00:16:22.801 4.858 - 4.887: 83.3542% ( 521) 00:16:22.801 4.887 - 4.916: 86.1041% ( 263) 00:16:22.801 4.916 - 4.945: 87.6202% ( 145) 00:16:22.801 4.945 - 4.975: 89.2723% ( 158) 00:16:22.801 4.975 - 5.004: 90.8093% ( 147) 00:16:22.801 5.004 - 5.033: 92.7018% ( 181) 00:16:22.801 5.033 - 5.062: 94.5211% ( 174) 00:16:22.801 5.062 - 5.091: 95.9222% ( 134) 00:16:22.801 5.091 - 5.120: 97.1978% ( 122) 00:16:22.801 5.120 - 5.149: 98.2016% ( 96) 00:16:22.801 5.149 - 5.178: 98.6512% ( 43) 00:16:22.801 5.178 - 5.207: 99.0694% ( 40) 00:16:22.801 5.207 - 5.236: 99.2263% ( 15) 00:16:22.801 5.236 - 5.265: 99.3413% ( 11) 00:16:22.801 5.265 - 5.295: 99.3517% ( 1) 00:16:22.801 5.295 - 5.324: 99.3726% ( 2) 00:16:22.801 5.324 - 5.353: 99.3936% ( 2) 00:16:22.801 5.382 - 5.411: 99.4040% ( 1) 00:16:22.801 5.411 - 5.440: 99.4145% ( 1) 00:16:22.801 7.622 - 7.680: 99.4249% ( 1) 00:16:22.801 7.738 - 7.796: 99.4354% ( 1) 00:16:22.801 7.796 - 7.855: 99.4668% ( 3) 00:16:22.801 7.855 - 7.913: 99.4772% ( 1) 00:16:22.801 7.913 - 7.971: 99.5086% ( 3) 00:16:22.801 8.145 - 8.204: 99.5190% ( 1) 00:16:22.801 8.378 - 8.436: 99.5295% ( 1) 00:16:22.801 8.495 - 8.553: 99.5399% ( 1) 00:16:22.801 8.553 - 8.611: 99.5713% ( 3) 00:16:22.801 8.669 - 8.727: 99.5818% ( 1) 00:16:22.801 8.727 - 8.785: 99.6027% ( 2) 00:16:22.801 8.844 - 8.902: 99.6131% ( 1) 00:16:22.801 8.902 - 8.960: 99.6340% ( 2) 00:16:22.801 9.018 - 9.076: 99.6550% ( 2) 00:16:22.801 9.251 - 9.309: 99.6654% ( 1) 00:16:22.801 9.309 - 9.367: 99.6863% ( 2) 00:16:22.801 9.600 - 9.658: 99.6968% ( 1) 00:16:22.801 9.716 - 9.775: 99.7072% ( 1) 00:16:22.801 9.775 - 9.833: 99.7386% ( 3) 00:16:22.801 9.891 - 9.949: 99.7491% ( 1) 00:16:22.801 10.007 - 10.065: 99.7700% ( 2) 00:16:22.801 10.065 - 10.124: 99.7909% ( 2) 00:16:22.801 10.124 - 10.182: 99.8013% ( 1) 00:16:22.801 10.182 - 10.240: 99.8118% ( 1) 00:16:22.801 10.705 - 10.764: 99.8223% ( 1) 00:16:22.801 11.055 - 11.113: 99.8327% ( 1) 
00:16:22.801 11.171 - 11.229: 99.8432% ( 1) 00:16:22.801 16.291 - 16.407: 99.8536% ( 1) 00:16:22.801 3053.382 - 3068.276: 99.8641% ( 1) 00:16:22.801 3991.738 - 4021.527: 100.0000% ( 13) 00:16:22.801 00:16:22.801 Complete histogram 00:16:22.801 ================== 00:16:22.801 Range in us Cumulative Count 00:16:22.801 2.691 - 2.705: 0.0105% ( 1) 00:16:22.801 2.705 - 2.720: 0.9306% ( 88) 00:16:22.801 2.720 - 2.735: 14.5128% ( 1299) 00:16:22.801 2.735 - 2.749: 46.1418% ( 3025) 00:16:22.801 2.749 - 2.764: 65.5793% ( 1859) 00:16:22.801 2.764 - 2.778: 72.1769% ( 631) 00:16:22.801 2.778 - 2.793: 81.4826% ( 890) 00:16:22.801 2.793 - 2.807: 90.0146% ( 816) 00:16:22.801 2.807 - 2.822: 93.0468% ( 290) 00:16:22.801 2.822 - 2.836: 95.5249% ( 237) 00:16:22.801 2.836 - 2.851: 97.0201% ( 143) 00:16:22.801 2.851 - 2.865: 97.6161% ( 57) 00:16:22.801 2.865 - 2.880: 98.0657% ( 43) 00:16:22.801 2.880 - 2.895: 98.4944% ( 41) 00:16:22.801 2.895 - 2.909: 98.6198% ( 12) 00:16:22.801 2.909 - 2.924: 98.6616% ( 4) 00:16:22.801 2.924 - 2.938: 98.7348% ( 7) 00:16:23.060 [2024-07-24 18:53:07.816103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.060 2.938 - 2.953: 98.8394% ( 10) 00:16:23.060 2.953 - 2.967: 98.8708% ( 3) 00:16:23.060 2.967 - 2.982: 98.8917% ( 2) 00:16:23.060 2.982 - 2.996: 98.9230% ( 3) 00:16:23.060 2.996 - 3.011: 98.9544% ( 3) 00:16:23.060 3.025 - 3.040: 98.9649% ( 1) 00:16:23.060 3.040 - 3.055: 98.9753% ( 1) 00:16:23.060 3.055 - 3.069: 98.9962% ( 2) 00:16:23.060 5.353 - 5.382: 99.0067% ( 1) 00:16:23.060 5.731 - 5.760: 99.0171% ( 1) 00:16:23.060 6.196 - 6.225: 99.0276% ( 1) 00:16:23.060 6.400 - 6.429: 99.0381% ( 1) 00:16:23.060 6.458 - 6.487: 99.0485% ( 1) 00:16:23.060 6.778 - 6.807: 99.0590% ( 1) 00:16:23.060 7.215 - 7.244: 99.0694% ( 1) 00:16:23.060 7.302 - 7.331: 99.0799% ( 1) 00:16:23.060 7.331 - 7.360: 99.0903% ( 1) 00:16:23.060 7.418 - 7.447: 99.1008% ( 1) 00:16:23.060 7.447 - 7.505: 99.1113% ( 1) 00:16:23.060 7.971 - 8.029: 99.1322% ( 2) 00:16:23.060 8.844 - 8.902: 99.1426% ( 1) 00:16:23.060 9.018 - 9.076: 99.1531% ( 1) 00:16:23.060 9.076 - 9.135: 99.1635% ( 1) 00:16:23.060 9.135 - 9.193: 99.1740% ( 1) 00:16:23.060 54.225 - 54.458: 99.1844% ( 1) 00:16:23.060 203.869 - 204.800: 99.1949% ( 1) 00:16:23.060 1608.611 - 1616.058: 99.2054% ( 1) 00:16:23.060 1995.869 - 2010.764: 99.2158% ( 1) 00:16:23.060 3038.487 - 3053.382: 99.2263% ( 1) 00:16:23.061 3053.382 - 3068.276: 99.2367% ( 1) 00:16:23.061 3991.738 - 4021.527: 99.9791% ( 71) 00:16:23.061 4974.778 - 5004.567: 100.0000% ( 2) 00:16:23.061 00:16:23.061 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:23.061 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:23.061 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:23.061 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:23.061 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:23.320 [ 00:16:23.320 { 00:16:23.320 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.320 "subtype": "Discovery", 00:16:23.320 "listen_addresses": [], 00:16:23.320 
"allow_any_host": true, 00:16:23.320 "hosts": [] 00:16:23.320 }, 00:16:23.320 { 00:16:23.320 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:23.320 "subtype": "NVMe", 00:16:23.320 "listen_addresses": [ 00:16:23.320 { 00:16:23.320 "trtype": "VFIOUSER", 00:16:23.320 "adrfam": "IPv4", 00:16:23.320 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:23.320 "trsvcid": "0" 00:16:23.320 } 00:16:23.320 ], 00:16:23.320 "allow_any_host": true, 00:16:23.320 "hosts": [], 00:16:23.320 "serial_number": "SPDK1", 00:16:23.320 "model_number": "SPDK bdev Controller", 00:16:23.320 "max_namespaces": 32, 00:16:23.320 "min_cntlid": 1, 00:16:23.320 "max_cntlid": 65519, 00:16:23.320 "namespaces": [ 00:16:23.320 { 00:16:23.320 "nsid": 1, 00:16:23.320 "bdev_name": "Malloc1", 00:16:23.320 "name": "Malloc1", 00:16:23.320 "nguid": "DE35D71EE14A409798FFF4475E19FB15", 00:16:23.320 "uuid": "de35d71e-e14a-4097-98ff-f4475e19fb15" 00:16:23.320 }, 00:16:23.320 { 00:16:23.320 "nsid": 2, 00:16:23.320 "bdev_name": "Malloc3", 00:16:23.320 "name": "Malloc3", 00:16:23.320 "nguid": "AF6792E52CD74C60B3279E9C6DF94759", 00:16:23.320 "uuid": "af6792e5-2cd7-4c60-b327-9e9c6df94759" 00:16:23.320 } 00:16:23.320 ] 00:16:23.320 }, 00:16:23.320 { 00:16:23.320 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:23.320 "subtype": "NVMe", 00:16:23.320 "listen_addresses": [ 00:16:23.320 { 00:16:23.320 "trtype": "VFIOUSER", 00:16:23.320 "adrfam": "IPv4", 00:16:23.320 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:23.320 "trsvcid": "0" 00:16:23.320 } 00:16:23.320 ], 00:16:23.320 "allow_any_host": true, 00:16:23.320 "hosts": [], 00:16:23.320 "serial_number": "SPDK2", 00:16:23.320 "model_number": "SPDK bdev Controller", 00:16:23.320 "max_namespaces": 32, 00:16:23.320 "min_cntlid": 1, 00:16:23.320 "max_cntlid": 65519, 00:16:23.320 "namespaces": [ 00:16:23.320 { 00:16:23.320 "nsid": 1, 00:16:23.320 "bdev_name": "Malloc2", 00:16:23.320 "name": "Malloc2", 00:16:23.320 "nguid": "C8A9C988110246E38AF0266F1FE3047C", 00:16:23.320 "uuid": "c8a9c988-1102-46e3-8af0-266f1fe3047c" 00:16:23.320 } 00:16:23.320 ] 00:16:23.320 } 00:16:23.320 ] 00:16:23.320 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:23.320 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2483969 00:16:23.320 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:23.320 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:23.320 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:16:23.321 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:23.321 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:23.321 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:16:23.321 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:23.321 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:23.321 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.321 Malloc4 00:16:23.321 [2024-07-24 18:53:08.325548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.580 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:23.580 [2024-07-24 18:53:08.575193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.840 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:23.840 Asynchronous Event Request test 00:16:23.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.840 Registering asynchronous event callbacks... 00:16:23.840 Starting namespace attribute notice tests for all controllers... 00:16:23.840 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:23.840 aer_cb - Changed Namespace 00:16:23.840 Cleaning up... 00:16:23.840 [ 00:16:23.840 { 00:16:23.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:23.840 "subtype": "Discovery", 00:16:23.840 "listen_addresses": [], 00:16:23.840 "allow_any_host": true, 00:16:23.840 "hosts": [] 00:16:23.840 }, 00:16:23.840 { 00:16:23.840 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:23.840 "subtype": "NVMe", 00:16:23.840 "listen_addresses": [ 00:16:23.840 { 00:16:23.840 "trtype": "VFIOUSER", 00:16:23.840 "adrfam": "IPv4", 00:16:23.840 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:23.840 "trsvcid": "0" 00:16:23.840 } 00:16:23.840 ], 00:16:23.840 "allow_any_host": true, 00:16:23.840 "hosts": [], 00:16:23.840 "serial_number": "SPDK1", 00:16:23.840 "model_number": "SPDK bdev Controller", 00:16:23.840 "max_namespaces": 32, 00:16:23.840 "min_cntlid": 1, 00:16:23.840 "max_cntlid": 65519, 00:16:23.840 "namespaces": [ 00:16:23.840 { 00:16:23.840 "nsid": 1, 00:16:23.840 "bdev_name": "Malloc1", 00:16:23.840 "name": "Malloc1", 00:16:23.840 "nguid": "DE35D71EE14A409798FFF4475E19FB15", 00:16:23.840 "uuid": "de35d71e-e14a-4097-98ff-f4475e19fb15" 00:16:23.840 }, 00:16:23.840 { 00:16:23.840 "nsid": 2, 00:16:23.840 "bdev_name": "Malloc3", 00:16:23.840 "name": "Malloc3", 00:16:23.840 "nguid": "AF6792E52CD74C60B3279E9C6DF94759", 00:16:23.840 "uuid": "af6792e5-2cd7-4c60-b327-9e9c6df94759" 00:16:23.840 } 00:16:23.840 ] 00:16:23.840 }, 00:16:23.840 { 00:16:23.840 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:23.840 "subtype": "NVMe", 00:16:23.840 "listen_addresses": [ 00:16:23.840 { 00:16:23.840 "trtype": "VFIOUSER", 00:16:23.840 "adrfam": "IPv4", 00:16:23.840 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:23.840 "trsvcid": "0" 00:16:23.840 } 00:16:23.840 ], 00:16:23.840 "allow_any_host": true, 00:16:23.840 "hosts": [], 00:16:23.840 
"serial_number": "SPDK2", 00:16:23.840 "model_number": "SPDK bdev Controller", 00:16:23.840 "max_namespaces": 32, 00:16:23.840 "min_cntlid": 1, 00:16:23.840 "max_cntlid": 65519, 00:16:23.840 "namespaces": [ 00:16:23.840 { 00:16:23.840 "nsid": 1, 00:16:23.840 "bdev_name": "Malloc2", 00:16:23.840 "name": "Malloc2", 00:16:23.840 "nguid": "C8A9C988110246E38AF0266F1FE3047C", 00:16:23.840 "uuid": "c8a9c988-1102-46e3-8af0-266f1fe3047c" 00:16:23.840 }, 00:16:23.840 { 00:16:23.840 "nsid": 2, 00:16:23.840 "bdev_name": "Malloc4", 00:16:23.840 "name": "Malloc4", 00:16:23.840 "nguid": "1F5219365C984363838DABCF4B116C22", 00:16:23.840 "uuid": "1f521936-5c98-4363-838d-abcf4b116c22" 00:16:23.840 } 00:16:23.840 ] 00:16:23.840 } 00:16:23.840 ] 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2483969 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2475331 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2475331 ']' 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2475331 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2475331 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2475331' 00:16:24.099 killing process with pid 2475331 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2475331 00:16:24.099 18:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2475331 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2484023 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2484023' 00:16:24.359 Process pid: 2484023 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:24.359 18:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2484023 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2484023 ']' 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.359 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:24.359 [2024-07-24 18:53:09.264914] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:24.359 [2024-07-24 18:53:09.266196] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:16:24.359 [2024-07-24 18:53:09.266246] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.359 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.359 [2024-07-24 18:53:09.347524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.617 [2024-07-24 18:53:09.438332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.617 [2024-07-24 18:53:09.438374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.617 [2024-07-24 18:53:09.438385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.617 [2024-07-24 18:53:09.438394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.617 [2024-07-24 18:53:09.438401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.617 [2024-07-24 18:53:09.438463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.617 [2024-07-24 18:53:09.438593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.617 [2024-07-24 18:53:09.438708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.617 [2024-07-24 18:53:09.438708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.617 [2024-07-24 18:53:09.523662] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:24.617 [2024-07-24 18:53:09.524558] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:24.617 [2024-07-24 18:53:09.524575] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:16:24.617 [2024-07-24 18:53:09.524765] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:24.617 [2024-07-24 18:53:09.524993] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:24.617 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.617 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:24.617 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:25.555 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:25.815 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:25.815 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:25.815 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:25.815 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:26.073 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:26.073 Malloc1 00:16:26.332 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:26.332 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:26.591 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:26.850 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:26.850 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:26.850 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:27.109 Malloc2 00:16:27.109 18:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:27.369 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:27.628 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a 
/var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2484023 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2484023 ']' 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2484023 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2484023 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2484023' 00:16:27.887 killing process with pid 2484023 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2484023 00:16:27.887 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2484023 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:28.146 00:16:28.146 real 0m52.979s 00:16:28.146 user 3m25.069s 00:16:28.146 sys 0m3.621s 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:28.146 ************************************ 00:16:28.146 END TEST nvmf_vfio_user 00:16:28.146 ************************************ 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.146 ************************************ 00:16:28.146 START TEST nvmf_vfio_user_nvme_compliance 00:16:28.146 ************************************ 00:16:28.146 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:28.405 * Looking for test storage... 
00:16:28.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.405 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2484866 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2484866' 00:16:28.406 Process pid: 2484866 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2484866 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2484866 ']' 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.406 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:28.406 [2024-07-24 18:53:13.293895] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:16:28.406 [2024-07-24 18:53:13.293958] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.406 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.406 [2024-07-24 18:53:13.377650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.665 [2024-07-24 18:53:13.468899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.665 [2024-07-24 18:53:13.468941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.665 [2024-07-24 18:53:13.468951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.665 [2024-07-24 18:53:13.468960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.665 [2024-07-24 18:53:13.468972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.665 [2024-07-24 18:53:13.469031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.665 [2024-07-24 18:53:13.469072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.665 [2024-07-24 18:53:13.469073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.665 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.665 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:28.665 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.600 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.858 malloc0 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.858 18:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:29.858 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.858 00:16:29.858 00:16:29.858 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.858 http://cunit.sourceforge.net/ 00:16:29.858 00:16:29.858 00:16:29.858 Suite: nvme_compliance 00:16:29.858 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 18:53:14.859287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.858 [2024-07-24 18:53:14.860824] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:29.858 [2024-07-24 18:53:14.860849] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:29.858 [2024-07-24 18:53:14.860860] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:29.858 [2024-07-24 18:53:14.862320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.117 passed 00:16:30.117 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 18:53:14.964538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.117 [2024-07-24 18:53:14.967564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.117 passed 00:16:30.117 Test: admin_identify_ns ...[2024-07-24 18:53:15.070447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.375 [2024-07-24 18:53:15.130624] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:30.375 [2024-07-24 18:53:15.138616] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:30.375 [2024-07-24 
18:53:15.159766] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.375 passed 00:16:30.375 Test: admin_get_features_mandatory_features ...[2024-07-24 18:53:15.255724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.375 [2024-07-24 18:53:15.258738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.376 passed 00:16:30.376 Test: admin_get_features_optional_features ...[2024-07-24 18:53:15.357799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.376 [2024-07-24 18:53:15.360842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.634 passed 00:16:30.634 Test: admin_set_features_number_of_queues ...[2024-07-24 18:53:15.461005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.634 [2024-07-24 18:53:15.565724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.634 passed 00:16:30.893 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 18:53:15.666023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.893 [2024-07-24 18:53:15.669076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.893 passed 00:16:30.893 Test: admin_get_log_page_with_lpo ...[2024-07-24 18:53:15.768452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.893 [2024-07-24 18:53:15.834621] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:30.893 [2024-07-24 18:53:15.847694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.893 passed 00:16:31.151 Test: fabric_property_get ...[2024-07-24 18:53:15.946135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.151 [2024-07-24 18:53:15.947495] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:31.151 [2024-07-24 18:53:15.950177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.151 passed 00:16:31.151 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 18:53:16.049093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.151 [2024-07-24 18:53:16.050572] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:31.151 [2024-07-24 18:53:16.052129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.151 passed 00:16:31.151 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 18:53:16.149307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.409 [2024-07-24 18:53:16.236620] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:31.409 [2024-07-24 18:53:16.252611] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:31.409 [2024-07-24 18:53:16.257720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.409 passed 00:16:31.409 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 18:53:16.353728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.409 [2024-07-24 18:53:16.355215] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:16:31.409 [2024-07-24 18:53:16.356772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.409 passed 00:16:31.668 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 18:53:16.455898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.668 [2024-07-24 18:53:16.532618] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:31.668 [2024-07-24 18:53:16.556615] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:31.668 [2024-07-24 18:53:16.561737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.668 passed 00:16:31.668 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 18:53:16.658884] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.668 [2024-07-24 18:53:16.663403] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:31.668 [2024-07-24 18:53:16.663452] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:31.668 [2024-07-24 18:53:16.664946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.927 passed 00:16:31.927 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 18:53:16.760444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.927 [2024-07-24 18:53:16.851628] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:31.927 [2024-07-24 18:53:16.858447] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:31.927 [2024-07-24 18:53:16.866618] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:31.927 [2024-07-24 18:53:16.874620] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:31.927 [2024-07-24 18:53:16.903710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.185 passed 00:16:32.185 Test: admin_create_io_sq_verify_pc ...[2024-07-24 18:53:17.004035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.185 [2024-07-24 18:53:17.022634] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:32.185 [2024-07-24 18:53:17.040358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.185 passed 00:16:32.185 Test: admin_create_io_qp_max_qps ...[2024-07-24 18:53:17.137370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.562 [2024-07-24 18:53:18.248617] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:33.886 [2024-07-24 18:53:18.640976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.886 passed 00:16:33.886 Test: admin_create_io_sq_shared_cq ...[2024-07-24 18:53:18.738431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.145 [2024-07-24 18:53:18.869620] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:34.145 [2024-07-24 18:53:18.906686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.145 passed 00:16:34.145 00:16:34.145 Run Summary: Type Total Ran Passed Failed Inactive 00:16:34.145 
00:16:34.145 suites 1 1 n/a 0 0
00:16:34.145 tests 18 18 18 0 0
00:16:34.145 asserts 360 360 360 0 n/a
00:16:34.145
00:16:34.145 Elapsed time = 1.705 seconds
00:16:34.145 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2484866
00:16:34.145 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2484866 ']'
00:16:34.145 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2484866
00:16:34.145 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname
00:16:34.145 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:34.145 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2484866
00:16:34.145 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:34.145 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:34.145 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2484866'
00:16:34.145 killing process with pid 2484866
00:16:34.145 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2484866
00:16:34.145 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2484866
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:16:34.403
00:16:34.403 real 0m6.143s
00:16:34.403 user 0m17.296s
00:16:34.403 sys 0m0.482s
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:16:34.403 ************************************
00:16:34.403 END TEST nvmf_vfio_user_nvme_compliance
00:16:34.403 ************************************
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:34.403 ************************************
00:16:34.403 START TEST nvmf_vfio_user_fuzz
00:16:34.403 ************************************
00:16:34.403 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:34.662 * Looking for test storage...
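The killprocess teardown traced above follows one fixed pattern before each section ends: validate the pid argument, probe it with kill -0, read its comm name, refuse to signal a sudo wrapper, then kill and reap it. A condensed sketch of that flow, paraphrased from the xtrace above rather than copied from autotest_common.sh:

    killprocess() {                              # sketch of the traced pattern
        local pid=$1
        [ -z "$pid" ] && return 1                # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0   # already exited
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # don't signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap so sockets and ports free up
    }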
00:16:34.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2485975 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2485975' 00:16:34.662 Process pid: 2485975 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2485975 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2485975 ']' 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
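waitforlisten above simply blocks until the freshly launched nvmf_tgt (pid 2485975) answers on its RPC socket, /var/tmp/spdk.sock. A minimal stand-in with the same effect is sketched below; scripts/rpc.py ships in the SPDK tree, the retry budget mirrors the max_retries=100 seen above, and using spdk_get_version as the cheap liveness RPC is an assumption:

    # Hedged sketch: poll the RPC socket until the target responds or we give up.
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done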
00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.662 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:34.921 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.921 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:34.921 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.873 malloc0 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
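The rpc_cmd calls above assemble the whole fuzz target: a VFIOUSER transport, a 64 MiB malloc bdev, subsystem nqn.2021-09.io.spdk:cnode0 with serial 'spdk', that bdev as a namespace, and a listener rooted at /var/run/vfio-user. Outside the harness the same bring-up can be done with SPDK's scripts/rpc.py, which issues the identical RPC methods (a sketch; the default RPC socket is assumed):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0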
00:16:35.873 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:17:07.950 Fuzzing completed. Shutting down the fuzz application
00:17:07.950
00:17:07.950 Dumping successful admin opcodes:
00:17:07.950 8, 9, 10, 24,
00:17:07.950 Dumping successful io opcodes:
00:17:07.950 0,
00:17:07.950 NS: 0x200003a1ef00 I/O qp, Total commands completed: 599057, total successful commands: 2317, random_seed: 167726208
00:17:07.950 NS: 0x200003a1ef00 admin qp, Total commands completed: 147476, total successful commands: 1192, random_seed: 3439588736
00:17:07.950 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:17:07.950 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.950 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2485975
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2485975 ']'
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2485975
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2485975
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2485975'
00:17:07.951 killing process with pid 2485975
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2485975
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2485975
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:17:07.951
00:17:07.951 real 0m32.369s
00:17:07.951 user 0m36.685s
00:17:07.951 sys 0m25.212s
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:17:07.951
************************************ 00:17:07.951 END TEST nvmf_vfio_user_fuzz 00:17:07.951 ************************************ 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.951 ************************************ 00:17:07.951 START TEST nvmf_auth_target 00:17:07.951 ************************************ 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:07.951 * Looking for test storage... 00:17:07.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.951 18:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.951 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.224 18:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:13.224 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.224 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:13.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:13.225 Found net devices under 0000:af:00.0: cvl_0_0 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.225 18:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:13.225 Found net devices under 0000:af:00.1: cvl_0_1 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.225 18:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:13.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:13.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms
00:17:13.225
00:17:13.225 --- 10.0.0.2 ping statistics ---
00:17:13.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:13.225 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:13.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:13.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms
00:17:13.225
00:17:13.225 --- 10.0.0.1 ping statistics ---
00:17:13.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:13.225 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2494880
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2494880
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2494880 ']'
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:13.225 18:53:57
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.225 18:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2494927 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.225 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e97e713c8227fdb165e07d2775c3a18d9e377cb602c50019 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YpB 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e97e713c8227fdb165e07d2775c3a18d9e377cb602c50019 0 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e97e713c8227fdb165e07d2775c3a18d9e377cb602c50019 0 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e97e713c8227fdb165e07d2775c3a18d9e377cb602c50019 00:17:13.226 18:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YpB 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YpB 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.YpB 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:13.226 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a6edf83c2a6d0e0cac4bedb1f828dcba11598879c0b8b5917887df8536819e80 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DpC 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a6edf83c2a6d0e0cac4bedb1f828dcba11598879c0b8b5917887df8536819e80 3 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a6edf83c2a6d0e0cac4bedb1f828dcba11598879c0b8b5917887df8536819e80 3 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a6edf83c2a6d0e0cac4bedb1f828dcba11598879c0b8b5917887df8536819e80 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DpC 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DpC 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.DpC 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.485 18:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b0871b3b3db97cd6bd71e5731322ef82 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VZF 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b0871b3b3db97cd6bd71e5731322ef82 1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b0871b3b3db97cd6bd71e5731322ef82 1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b0871b3b3db97cd6bd71e5731322ef82 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VZF 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VZF 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.VZF 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f3d81ba0aa75092eda9a0a5dae7eb3b31ef78ef41e019a40 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7VT 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f3d81ba0aa75092eda9a0a5dae7eb3b31ef78ef41e019a40 2 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
f3d81ba0aa75092eda9a0a5dae7eb3b31ef78ef41e019a40 2 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f3d81ba0aa75092eda9a0a5dae7eb3b31ef78ef41e019a40 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7VT 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7VT 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.7VT 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d094ccb7dc358898e9d9c842ddbafefefc644ec420bdfb3f 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.z6v 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d094ccb7dc358898e9d9c842ddbafefefc644ec420bdfb3f 2 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d094ccb7dc358898e9d9c842ddbafefefc644ec420bdfb3f 2 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d094ccb7dc358898e9d9c842ddbafefefc644ec420bdfb3f 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:13.485 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.z6v 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.z6v 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.z6v 00:17:13.744 18:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c0abb85d58b6f6590d20cbcafd568669 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HaR 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c0abb85d58b6f6590d20cbcafd568669 1 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c0abb85d58b6f6590d20cbcafd568669 1 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c0abb85d58b6f6590d20cbcafd568669 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HaR 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HaR 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.HaR 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7654c3b1e5d59b242f3e9c0b09bb7fdf9ca915a2ab1021d2ddf6fd5ff4afbe40 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:13.744 
18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rTu 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7654c3b1e5d59b242f3e9c0b09bb7fdf9ca915a2ab1021d2ddf6fd5ff4afbe40 3 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7654c3b1e5d59b242f3e9c0b09bb7fdf9ca915a2ab1021d2ddf6fd5ff4afbe40 3 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7654c3b1e5d59b242f3e9c0b09bb7fdf9ca915a2ab1021d2ddf6fd5ff4afbe40 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rTu 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rTu 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.rTu 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2494880 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2494880 ']' 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.744 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2494927 /var/tmp/host.sock 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2494927 ']' 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
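[annotation] At this point the test has minted its full key set: keys[0..3] (null, sha256, sha384 and sha512 flavours) plus controller keys ckeys[0..2], with ckeys[3] left empty on purpose, and it is now waiting for the target (/var/tmp/spdk.sock) and host (/var/tmp/host.sock) RPC servers. Each gen_dhchap_key call above reads random bytes with xxd, wraps them into a DHHC-1 secret via the inline "python -" step (whose body the trace does not echo), and locks the file down with chmod 0600. A minimal sketch of that wrapping, assuming the conventional CRC-32 suffix packed little-endian into the base64 payload:

# sketch of the DHHC-1 wrapping performed by gen_dhchap_key above;
# the CRC-32 suffix and its byte order are assumptions, since the
# trace elides the inline python body
key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars, sha256-sized
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" 1 > "$file" <<'PY'
import base64, binascii, struct, sys
key, digest = sys.argv[1], int(sys.argv[2])  # digest id: 0=null 1=sha256 2=sha384 3=sha512
crc = struct.pack("<I", binascii.crc32(key.encode()) & 0xffffffff)
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key.encode() + crc).decode()))
PY
chmod 0600 "$file"

Base64-decoding any of the DHHC-1:xx:...: strings that appear later in this trace (for example the --dhchap-secret arguments to nvme connect) gives back exactly the ASCII hex key plus four trailing CRC bytes, which is what the sketch reproduces.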
00:17:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.003 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YpB 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.YpB 00:17:14.262 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YpB 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.DpC ]] 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DpC 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DpC 00:17:14.521 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DpC 00:17:14.779 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:14.779 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VZF 00:17:14.779 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.779 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.779 18:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.779 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VZF 00:17:14.779 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VZF 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.7VT ]] 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7VT 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7VT 00:17:15.038 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7VT 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.z6v 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.z6v 00:17:15.296 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.z6v 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.HaR ]] 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HaR 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HaR 00:17:15.554 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HaR 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rTu 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rTu 00:17:15.812 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rTu 00:17:16.070 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:16.070 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:16.070 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.070 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.070 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.070 18:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.330 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.589 00:17:16.589 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.589 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.589 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.848 { 00:17:16.848 "cntlid": 1, 00:17:16.848 "qid": 0, 00:17:16.848 "state": "enabled", 00:17:16.848 "thread": "nvmf_tgt_poll_group_000", 00:17:16.848 "listen_address": { 00:17:16.848 "trtype": "TCP", 00:17:16.848 "adrfam": "IPv4", 00:17:16.848 "traddr": "10.0.0.2", 00:17:16.848 "trsvcid": "4420" 00:17:16.848 }, 00:17:16.848 "peer_address": { 00:17:16.848 "trtype": "TCP", 00:17:16.848 "adrfam": "IPv4", 00:17:16.848 "traddr": "10.0.0.1", 00:17:16.848 "trsvcid": "45774" 00:17:16.848 }, 00:17:16.848 "auth": { 00:17:16.848 "state": "completed", 00:17:16.848 "digest": "sha256", 00:17:16.848 "dhgroup": "null" 00:17:16.848 } 00:17:16.848 } 00:17:16.848 ]' 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.848 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.106 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:17.106 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.106 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.106 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.106 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.365 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:17:18.300 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.300 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:18.582 00:17:18.840 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.840 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.840 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.099 { 00:17:19.099 "cntlid": 3, 00:17:19.099 "qid": 0, 00:17:19.099 "state": "enabled", 00:17:19.099 "thread": "nvmf_tgt_poll_group_000", 00:17:19.099 "listen_address": { 00:17:19.099 "trtype": "TCP", 00:17:19.099 "adrfam": "IPv4", 00:17:19.099 "traddr": "10.0.0.2", 00:17:19.099 "trsvcid": "4420" 00:17:19.099 }, 00:17:19.099 "peer_address": { 00:17:19.099 "trtype": "TCP", 00:17:19.099 "adrfam": "IPv4", 00:17:19.099 "traddr": "10.0.0.1", 00:17:19.099 "trsvcid": "45790" 00:17:19.099 }, 00:17:19.099 "auth": { 00:17:19.099 "state": "completed", 00:17:19.099 "digest": "sha256", 00:17:19.099 "dhgroup": "null" 00:17:19.099 } 00:17:19.099 } 00:17:19.099 ]' 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:19.099 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.099 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.099 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.099 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.357 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.293 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.293 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.552 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.811 00:17:20.811 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.811 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.811 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.070 { 00:17:21.070 "cntlid": 5, 00:17:21.070 "qid": 0, 00:17:21.070 "state": "enabled", 00:17:21.070 "thread": "nvmf_tgt_poll_group_000", 00:17:21.070 "listen_address": { 00:17:21.070 "trtype": "TCP", 00:17:21.070 "adrfam": "IPv4", 00:17:21.070 "traddr": "10.0.0.2", 00:17:21.070 "trsvcid": "4420" 00:17:21.070 }, 00:17:21.070 "peer_address": { 00:17:21.070 "trtype": "TCP", 00:17:21.070 "adrfam": "IPv4", 00:17:21.070 "traddr": "10.0.0.1", 00:17:21.070 "trsvcid": "47382" 00:17:21.070 }, 00:17:21.070 "auth": { 00:17:21.070 "state": "completed", 00:17:21.070 "digest": "sha256", 00:17:21.070 "dhgroup": "null" 00:17:21.070 } 00:17:21.070 } 00:17:21.070 ]' 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.070 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.070 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:21.070 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.329 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.329 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.329 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.329 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.267 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.527 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.786 00:17:22.786 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.786 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.786 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.045 { 00:17:23.045 "cntlid": 7, 00:17:23.045 "qid": 0, 00:17:23.045 "state": "enabled", 00:17:23.045 "thread": "nvmf_tgt_poll_group_000", 00:17:23.045 "listen_address": { 00:17:23.045 "trtype": "TCP", 00:17:23.045 "adrfam": "IPv4", 00:17:23.045 "traddr": "10.0.0.2", 00:17:23.045 "trsvcid": "4420" 00:17:23.045 }, 00:17:23.045 "peer_address": { 00:17:23.045 "trtype": "TCP", 00:17:23.045 "adrfam": "IPv4", 00:17:23.045 "traddr": "10.0.0.1", 00:17:23.045 "trsvcid": "47406" 00:17:23.045 }, 00:17:23.045 "auth": { 00:17:23.045 "state": "completed", 00:17:23.045 "digest": "sha256", 00:17:23.045 "dhgroup": "null" 00:17:23.045 } 00:17:23.045 } 00:17:23.045 ]' 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.045 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:23.045 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.045 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.045 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.045 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.612 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.179 18:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.179 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.438 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.697 00:17:24.697 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.697 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.697 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.955 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.955 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.955 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.955 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.955 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.955 18:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.955 { 00:17:24.955 "cntlid": 9, 00:17:24.955 "qid": 0, 00:17:24.955 "state": "enabled", 00:17:24.955 "thread": "nvmf_tgt_poll_group_000", 00:17:24.955 "listen_address": { 00:17:24.955 "trtype": "TCP", 00:17:24.955 "adrfam": "IPv4", 00:17:24.955 "traddr": "10.0.0.2", 00:17:24.955 "trsvcid": "4420" 00:17:24.955 }, 00:17:24.955 "peer_address": { 00:17:24.955 "trtype": "TCP", 00:17:24.955 "adrfam": "IPv4", 00:17:24.955 "traddr": "10.0.0.1", 00:17:24.955 "trsvcid": "47426" 00:17:24.955 }, 00:17:24.955 "auth": { 00:17:24.955 "state": "completed", 00:17:24.955 "digest": "sha256", 00:17:24.955 "dhgroup": "ffdhe2048" 00:17:24.955 } 00:17:24.955 } 00:17:24.955 ]' 00:17:24.955 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.215 18:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.215 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.215 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.215 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.215 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.215 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.215 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.473 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.409 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.667 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.668 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.235 00:17:27.235 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.235 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.235 18:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.494 { 00:17:27.494 "cntlid": 11, 00:17:27.494 "qid": 0, 00:17:27.494 "state": "enabled", 00:17:27.494 "thread": "nvmf_tgt_poll_group_000", 00:17:27.494 "listen_address": { 
00:17:27.494 "trtype": "TCP", 00:17:27.494 "adrfam": "IPv4", 00:17:27.494 "traddr": "10.0.0.2", 00:17:27.494 "trsvcid": "4420" 00:17:27.494 }, 00:17:27.494 "peer_address": { 00:17:27.494 "trtype": "TCP", 00:17:27.494 "adrfam": "IPv4", 00:17:27.494 "traddr": "10.0.0.1", 00:17:27.494 "trsvcid": "47458" 00:17:27.494 }, 00:17:27.494 "auth": { 00:17:27.494 "state": "completed", 00:17:27.494 "digest": "sha256", 00:17:27.494 "dhgroup": "ffdhe2048" 00:17:27.494 } 00:17:27.494 } 00:17:27.494 ]' 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.494 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.753 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.689 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:28.947 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:29.205
00:17:29.206 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:29.206 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.206 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:29.465 {
00:17:29.465 "cntlid": 13,
00:17:29.465 "qid": 0,
00:17:29.465 "state": "enabled",
00:17:29.465 "thread": "nvmf_tgt_poll_group_000",
00:17:29.465 "listen_address": {
00:17:29.465 "trtype": "TCP",
00:17:29.465 "adrfam": "IPv4",
00:17:29.465 "traddr": "10.0.0.2",
00:17:29.465 "trsvcid": "4420"
00:17:29.465 },
00:17:29.465 "peer_address": {
00:17:29.465 "trtype": "TCP",
00:17:29.465 "adrfam": "IPv4",
00:17:29.465 "traddr": "10.0.0.1",
00:17:29.465 "trsvcid": "37018"
00:17:29.465 },
00:17:29.465 "auth": {
00:17:29.465 "state": "completed",
00:17:29.465 "digest": "sha256",
00:17:29.465 "dhgroup": "ffdhe2048"
00:17:29.465 }
00:17:29.465 }
00:17:29.465 ]'
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:29.465 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:29.724 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.724 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.724 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.724 18:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX:
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:30.659 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@36 -- # key=key3
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:30.918 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:31.486
00:17:31.486 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:31.486 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:31.486 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:31.745 {
00:17:31.745 "cntlid": 15,
00:17:31.745 "qid": 0,
00:17:31.745 "state": "enabled",
00:17:31.745 "thread": "nvmf_tgt_poll_group_000",
00:17:31.745 "listen_address": {
00:17:31.745 "trtype": "TCP",
00:17:31.745 "adrfam": "IPv4",
00:17:31.745 "traddr": "10.0.0.2",
00:17:31.745 "trsvcid": "4420"
00:17:31.745 },
00:17:31.745 "peer_address": {
00:17:31.745 "trtype": "TCP",
00:17:31.745 "adrfam": "IPv4",
00:17:31.745 "traddr": "10.0.0.1",
00:17:31.745 "trsvcid": "37054"
00:17:31.745 },
00:17:31.745 "auth": {
00:17:31.745 "state": "completed",
00:17:31.745 "digest": "sha256",
00:17:31.745 "dhgroup": "ffdhe2048"
00:17:31.745 }
00:17:31.745 }
00:17:31.745 ]'
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:31.745 18:54:16
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:31.745 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:32.004 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=:
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:32.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:32.975 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:33.235 18:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:33.493
00:17:33.493 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:33.493 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:33.493 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:33.752 {
00:17:33.752 "cntlid": 17,
00:17:33.752 "qid": 0,
00:17:33.752 "state": "enabled",
00:17:33.752 "thread": "nvmf_tgt_poll_group_000",
00:17:33.752 "listen_address": {
00:17:33.752 "trtype": "TCP",
00:17:33.752 "adrfam": "IPv4",
00:17:33.752 "traddr": "10.0.0.2",
00:17:33.752 "trsvcid": "4420"
00:17:33.752 },
00:17:33.752 "peer_address": {
00:17:33.752 "trtype": "TCP",
00:17:33.752 "adrfam": "IPv4",
00:17:33.752 "traddr": "10.0.0.1",
00:17:33.752 "trsvcid": "37094"
00:17:33.752 },
00:17:33.752 "auth": {
00:17:33.752 "state": "completed",
00:17:33.752 "digest": "sha256",
00:17:33.752 "dhgroup": "ffdhe3072"
00:17:33.752 }
00:17:33.752 }
00:17:33.752 ]'
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:33.752 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:33.752 18:54:18
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:34.010 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:34.010 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:34.010 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:34.269 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=:
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:34.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:34.836 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:35.095 18:54:19
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:35.095 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:35.353
00:17:35.353 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:35.353 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:35.353 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:35.611 {
00:17:35.611 "cntlid": 19,
00:17:35.611 "qid": 0,
00:17:35.611 "state": "enabled",
00:17:35.611 "thread": "nvmf_tgt_poll_group_000",
00:17:35.611 "listen_address": {
00:17:35.611 "trtype": "TCP",
00:17:35.611 "adrfam": "IPv4",
00:17:35.611 "traddr": "10.0.0.2",
00:17:35.611 "trsvcid": "4420"
00:17:35.611 },
00:17:35.611 "peer_address": {
00:17:35.611 "trtype": "TCP",
00:17:35.611 "adrfam": "IPv4",
00:17:35.611 "traddr": "10.0.0.1",
00:17:35.611 "trsvcid": "37116"
00:17:35.611 },
00:17:35.611 "auth": {
00:17:35.611 "state": "completed",
00:17:35.611 "digest": "sha256",
00:17:35.611 "dhgroup": "ffdhe3072"
00:17:35.611 }
00:17:35.611 }
00:17:35.611 ]'
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:35.611 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:35.870 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:35.870 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:35.870 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:35.870 18:54:20
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:35.870 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:36.128 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==:
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:36.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:36.695 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:36.953 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:37.212
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:37.470 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:37.729 {
00:17:37.729 "cntlid": 21,
00:17:37.729 "qid": 0,
00:17:37.729 "state": "enabled",
00:17:37.729 "thread": "nvmf_tgt_poll_group_000",
00:17:37.729 "listen_address": {
00:17:37.729 "trtype": "TCP",
00:17:37.729 "adrfam": "IPv4",
00:17:37.729 "traddr": "10.0.0.2",
00:17:37.729 "trsvcid": "4420"
00:17:37.729 },
00:17:37.729 "peer_address": {
00:17:37.729 "trtype": "TCP",
00:17:37.729 "adrfam": "IPv4",
00:17:37.729 "traddr": "10.0.0.1",
00:17:37.729 "trsvcid": "37148"
00:17:37.729 },
00:17:37.729 "auth": {
00:17:37.729 "state": "completed",
00:17:37.729 "digest": "sha256",
00:17:37.729 "dhgroup": "ffdhe3072"
00:17:37.729 }
00:17:37.729 }
00:17:37.729 ]'
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:37.729 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:37.988
18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX:
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:38.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:38.923 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:39.182 18:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:39.440
00:17:39.440 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:39.440 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:39.440 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:39.699 {
00:17:39.699 "cntlid": 23,
00:17:39.699 "qid": 0,
00:17:39.699 "state": "enabled",
00:17:39.699 "thread": "nvmf_tgt_poll_group_000",
00:17:39.699 "listen_address": {
00:17:39.699 "trtype": "TCP",
00:17:39.699 "adrfam": "IPv4",
00:17:39.699 "traddr": "10.0.0.2",
00:17:39.699 "trsvcid": "4420"
00:17:39.699 },
00:17:39.699 "peer_address": {
00:17:39.699 "trtype": "TCP",
00:17:39.699 "adrfam": "IPv4",
00:17:39.699 "traddr": "10.0.0.1",
00:17:39.699 "trsvcid": "36456"
00:17:39.699 },
00:17:39.699 "auth": {
00:17:39.699 "state": "completed",
00:17:39.699 "digest": "sha256",
00:17:39.699 "dhgroup": "ffdhe3072"
00:17:39.699 }
00:17:39.699 }
00:17:39.699 ]'
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:39.699 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:39.958 18:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=:
00:17:40.900 18:54:25
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:40.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:40.900 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:40.900 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:40.900 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.901 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:40.901 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:40.901 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:40.901 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:40.901 18:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:41.159 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:41.416
00:17:41.416 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:41.416 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:41.416 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:41.674 {
00:17:41.674 "cntlid": 25,
00:17:41.674 "qid": 0,
00:17:41.674 "state": "enabled",
00:17:41.674 "thread": "nvmf_tgt_poll_group_000",
00:17:41.674 "listen_address": {
00:17:41.674 "trtype": "TCP",
00:17:41.674 "adrfam": "IPv4",
00:17:41.674 "traddr": "10.0.0.2",
00:17:41.674 "trsvcid": "4420"
00:17:41.674 },
00:17:41.674 "peer_address": {
00:17:41.674 "trtype": "TCP",
00:17:41.674 "adrfam": "IPv4",
00:17:41.674 "traddr": "10.0.0.1",
00:17:41.674 "trsvcid": "36482"
00:17:41.674 },
00:17:41.674 "auth": {
00:17:41.674 "state": "completed",
00:17:41.674 "digest": "sha256",
00:17:41.674 "dhgroup": "ffdhe4096"
00:17:41.674 }
00:17:41.674 }
00:17:41.674 ]'
00:17:41.674 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:41.933 18:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.191 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=:
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:43.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:43.126 18:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:43.126 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:43.127 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.127 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.127 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.127 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:43.127 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:43.694
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.694 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:43.952 {
00:17:43.952 "cntlid": 27,
00:17:43.952 "qid": 0,
00:17:43.952 "state": "enabled",
00:17:43.952 "thread": "nvmf_tgt_poll_group_000",
00:17:43.952 "listen_address": {
00:17:43.952 "trtype": "TCP",
00:17:43.952 "adrfam": "IPv4",
00:17:43.952 "traddr": "10.0.0.2",
00:17:43.952 "trsvcid": "4420"
00:17:43.952 },
00:17:43.952 "peer_address": {
00:17:43.952 "trtype": "TCP",
00:17:43.952 "adrfam": "IPv4",
00:17:43.952 "traddr": "10.0.0.1",
00:17:43.952 "trsvcid": "36510"
00:17:43.952 },
00:17:43.952 "auth": {
00:17:43.952 "state": "completed",
00:17:43.952 "digest": "sha256",
00:17:43.952 "dhgroup": "ffdhe4096"
00:17:43.952 }
00:17:43.952 }
00:17:43.952 ]'
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:43.952 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:43.953 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:43.953 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:43.953 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.211 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==:
00:17:45.147 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.147 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:45.148 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.148 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.148 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.148 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:45.148 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:45.148 18:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.148 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.714
00:17:45.714 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:45.714 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:45.714 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:45.972 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:45.973 {
00:17:45.973 "cntlid": 29,
00:17:45.973 "qid": 0,
00:17:45.973 "state": "enabled",
00:17:45.973 "thread": "nvmf_tgt_poll_group_000",
00:17:45.973 "listen_address": {
00:17:45.973 "trtype": "TCP",
00:17:45.973 "adrfam": "IPv4",
00:17:45.973 "traddr": "10.0.0.2",
00:17:45.973 "trsvcid": "4420"
00:17:45.973 },
00:17:45.973 "peer_address": {
00:17:45.973 "trtype": "TCP",
00:17:45.973 "adrfam": "IPv4",
00:17:45.973 "traddr": "10.0.0.1",
00:17:45.973 "trsvcid": "36540"
00:17:45.973 },
00:17:45.973 "auth": {
00:17:45.973 "state": "completed",
00:17:45.973 "digest": "sha256",
00:17:45.973 "dhgroup": "ffdhe4096"
00:17:45.973 }
00:17:45.973 }
00:17:45.973 ]'
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:45.973 18:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:46.231 18:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX:
00:17:47.167 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:47.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:47.168 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:17:47.168 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:47.168 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.168 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:47.168 18:54:32
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:47.168 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:47.168 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:47.426 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:48.030
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 ==
0 ]] 00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.030 { 00:17:48.030 "cntlid": 31, 00:17:48.030 "qid": 0, 00:17:48.030 "state": "enabled", 00:17:48.030 "thread": "nvmf_tgt_poll_group_000", 00:17:48.030 "listen_address": { 00:17:48.030 "trtype": "TCP", 00:17:48.030 "adrfam": "IPv4", 00:17:48.030 "traddr": "10.0.0.2", 00:17:48.030 "trsvcid": "4420" 00:17:48.030 }, 00:17:48.030 "peer_address": { 00:17:48.030 "trtype": "TCP", 00:17:48.030 "adrfam": "IPv4", 00:17:48.030 "traddr": "10.0.0.1", 00:17:48.030 "trsvcid": "36566" 00:17:48.030 }, 00:17:48.030 "auth": { 00:17:48.030 "state": "completed", 00:17:48.030 "digest": "sha256", 00:17:48.030 "dhgroup": "ffdhe4096" 00:17:48.030 } 00:17:48.030 } 00:17:48.030 ]' 00:17:48.030 18:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.030 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.289 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.289 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.289 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.289 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.289 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.289 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.546 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.480 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.048 00:17:50.048 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.048 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.048 18:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.307 { 00:17:50.307 "cntlid": 33, 00:17:50.307 "qid": 0, 00:17:50.307 "state": "enabled", 00:17:50.307 "thread": "nvmf_tgt_poll_group_000", 00:17:50.307 "listen_address": { 
00:17:50.307 "trtype": "TCP", 00:17:50.307 "adrfam": "IPv4", 00:17:50.307 "traddr": "10.0.0.2", 00:17:50.307 "trsvcid": "4420" 00:17:50.307 }, 00:17:50.307 "peer_address": { 00:17:50.307 "trtype": "TCP", 00:17:50.307 "adrfam": "IPv4", 00:17:50.307 "traddr": "10.0.0.1", 00:17:50.307 "trsvcid": "37602" 00:17:50.307 }, 00:17:50.307 "auth": { 00:17:50.307 "state": "completed", 00:17:50.307 "digest": "sha256", 00:17:50.307 "dhgroup": "ffdhe6144" 00:17:50.307 } 00:17:50.307 } 00:17:50.307 ]' 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.307 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.567 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.567 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.567 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.827 18:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.763 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.764 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:52.022 18:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.022 18:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.592 00:17:52.592 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.592 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.592 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.851 { 00:17:52.851 "cntlid": 35, 00:17:52.851 "qid": 0, 00:17:52.851 "state": "enabled", 00:17:52.851 "thread": "nvmf_tgt_poll_group_000", 00:17:52.851 "listen_address": { 00:17:52.851 "trtype": "TCP", 00:17:52.851 "adrfam": "IPv4", 00:17:52.851 "traddr": "10.0.0.2", 00:17:52.851 "trsvcid": "4420" 00:17:52.851 }, 00:17:52.851 "peer_address": { 00:17:52.851 "trtype": "TCP", 00:17:52.851 "adrfam": "IPv4", 00:17:52.851 "traddr": "10.0.0.1", 00:17:52.851 "trsvcid": "37616" 00:17:52.851 
}, 00:17:52.851 "auth": { 00:17:52.851 "state": "completed", 00:17:52.851 "digest": "sha256", 00:17:52.851 "dhgroup": "ffdhe6144" 00:17:52.851 } 00:17:52.851 } 00:17:52.851 ]' 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.851 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.109 18:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.676 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:53.935 18:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.935 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.503 00:17:54.503 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.503 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.503 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.763 { 00:17:54.763 "cntlid": 37, 00:17:54.763 "qid": 0, 00:17:54.763 "state": "enabled", 00:17:54.763 "thread": "nvmf_tgt_poll_group_000", 00:17:54.763 "listen_address": { 00:17:54.763 "trtype": "TCP", 00:17:54.763 "adrfam": "IPv4", 00:17:54.763 "traddr": "10.0.0.2", 00:17:54.763 "trsvcid": "4420" 00:17:54.763 }, 00:17:54.763 "peer_address": { 00:17:54.763 "trtype": "TCP", 00:17:54.763 "adrfam": "IPv4", 00:17:54.763 "traddr": "10.0.0.1", 00:17:54.763 "trsvcid": "37644" 00:17:54.763 }, 00:17:54.763 "auth": { 00:17:54.763 "state": "completed", 00:17:54.763 "digest": "sha256", 00:17:54.763 "dhgroup": "ffdhe6144" 00:17:54.763 } 00:17:54.763 } 00:17:54.763 ]' 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.763 18:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.763 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.022 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.022 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.022 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.022 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.022 18:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.281 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.217 18:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.217 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.784 00:17:56.784 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.784 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.784 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.042 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.042 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.042 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.042 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.042 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.043 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.043 { 00:17:57.043 "cntlid": 39, 00:17:57.043 "qid": 0, 00:17:57.043 "state": "enabled", 00:17:57.043 "thread": "nvmf_tgt_poll_group_000", 00:17:57.043 "listen_address": { 00:17:57.043 "trtype": "TCP", 00:17:57.043 "adrfam": "IPv4", 00:17:57.043 "traddr": "10.0.0.2", 00:17:57.043 "trsvcid": "4420" 00:17:57.043 }, 00:17:57.043 "peer_address": { 00:17:57.043 "trtype": "TCP", 00:17:57.043 "adrfam": "IPv4", 00:17:57.043 "traddr": "10.0.0.1", 00:17:57.043 "trsvcid": "37678" 00:17:57.043 }, 00:17:57.043 "auth": { 00:17:57.043 "state": "completed", 00:17:57.043 "digest": "sha256", 00:17:57.043 "dhgroup": "ffdhe6144" 00:17:57.043 } 00:17:57.043 } 00:17:57.043 ]' 00:17:57.043 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.043 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.043 18:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.043 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.043 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.301 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.301 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.301 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.559 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:17:58.125 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.383 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.641 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.208 00:17:59.208 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.208 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.208 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.468 { 00:17:59.468 "cntlid": 41, 00:17:59.468 "qid": 0, 00:17:59.468 "state": "enabled", 00:17:59.468 "thread": "nvmf_tgt_poll_group_000", 00:17:59.468 "listen_address": { 00:17:59.468 "trtype": "TCP", 00:17:59.468 "adrfam": "IPv4", 00:17:59.468 "traddr": "10.0.0.2", 00:17:59.468 "trsvcid": "4420" 00:17:59.468 }, 00:17:59.468 "peer_address": { 00:17:59.468 "trtype": "TCP", 00:17:59.468 "adrfam": "IPv4", 00:17:59.468 "traddr": "10.0.0.1", 00:17:59.468 "trsvcid": "37714" 00:17:59.468 }, 00:17:59.468 "auth": { 00:17:59.468 "state": "completed", 00:17:59.468 "digest": "sha256", 00:17:59.468 "dhgroup": "ffdhe8192" 00:17:59.468 } 00:17:59.468 } 00:17:59.468 ]' 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.468 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.727 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.727 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.727 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.727 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:59.727 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.985 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.920 18:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.857 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.857 { 00:18:01.857 "cntlid": 43, 00:18:01.857 "qid": 0, 00:18:01.857 "state": "enabled", 00:18:01.857 "thread": "nvmf_tgt_poll_group_000", 00:18:01.857 "listen_address": { 00:18:01.857 "trtype": "TCP", 00:18:01.857 "adrfam": "IPv4", 00:18:01.857 "traddr": "10.0.0.2", 00:18:01.857 "trsvcid": "4420" 00:18:01.857 }, 00:18:01.857 "peer_address": { 00:18:01.857 "trtype": "TCP", 00:18:01.857 "adrfam": "IPv4", 00:18:01.857 "traddr": "10.0.0.1", 00:18:01.857 "trsvcid": "46590" 00:18:01.857 }, 00:18:01.857 "auth": { 00:18:01.857 "state": "completed", 00:18:01.857 "digest": "sha256", 00:18:01.857 "dhgroup": "ffdhe8192" 00:18:01.857 } 00:18:01.857 } 00:18:01.857 ]' 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.857 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.116 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.116 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.116 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.116 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.116 18:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.374 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.005 18:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.264 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.201 00:18:04.201 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.201 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.201 18:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.201 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.201 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.201 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.201 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.459 { 00:18:04.459 "cntlid": 45, 00:18:04.459 "qid": 0, 00:18:04.459 "state": "enabled", 00:18:04.459 "thread": "nvmf_tgt_poll_group_000", 00:18:04.459 "listen_address": { 00:18:04.459 "trtype": "TCP", 00:18:04.459 "adrfam": "IPv4", 00:18:04.459 "traddr": "10.0.0.2", 00:18:04.459 "trsvcid": "4420" 00:18:04.459 }, 00:18:04.459 "peer_address": { 00:18:04.459 "trtype": "TCP", 00:18:04.459 "adrfam": "IPv4", 00:18:04.459 "traddr": "10.0.0.1", 00:18:04.459 "trsvcid": "46618" 00:18:04.459 }, 00:18:04.459 "auth": { 00:18:04.459 "state": "completed", 00:18:04.459 "digest": "sha256", 00:18:04.459 "dhgroup": "ffdhe8192" 00:18:04.459 } 00:18:04.459 } 00:18:04.459 ]' 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.459 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.717 18:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.654 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.590 00:18:06.590 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.590 18:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.590 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.590 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.590 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.590 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.591 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.591 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.591 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.591 { 00:18:06.591 "cntlid": 47, 00:18:06.591 "qid": 0, 00:18:06.591 "state": "enabled", 00:18:06.591 "thread": "nvmf_tgt_poll_group_000", 00:18:06.591 "listen_address": { 00:18:06.591 "trtype": "TCP", 00:18:06.591 "adrfam": "IPv4", 00:18:06.591 "traddr": "10.0.0.2", 00:18:06.591 "trsvcid": "4420" 00:18:06.591 }, 00:18:06.591 "peer_address": { 00:18:06.591 "trtype": "TCP", 00:18:06.591 "adrfam": "IPv4", 00:18:06.591 "traddr": "10.0.0.1", 00:18:06.591 "trsvcid": "46654" 00:18:06.591 }, 00:18:06.591 "auth": { 00:18:06.591 "state": "completed", 00:18:06.591 "digest": "sha256", 00:18:06.591 "dhgroup": "ffdhe8192" 00:18:06.591 } 00:18:06.591 } 00:18:06.591 ]' 00:18:06.591 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.849 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.108 18:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.044 18:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.044 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.044 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.044 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.303 00:18:08.569 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.569 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.569 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.569 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.828 { 00:18:08.828 "cntlid": 49, 00:18:08.828 "qid": 0, 00:18:08.828 "state": "enabled", 00:18:08.828 "thread": "nvmf_tgt_poll_group_000", 00:18:08.828 "listen_address": { 00:18:08.828 "trtype": "TCP", 00:18:08.828 "adrfam": "IPv4", 00:18:08.828 "traddr": "10.0.0.2", 00:18:08.828 "trsvcid": "4420" 00:18:08.828 }, 00:18:08.828 "peer_address": { 00:18:08.828 "trtype": "TCP", 00:18:08.828 "adrfam": "IPv4", 00:18:08.828 "traddr": "10.0.0.1", 00:18:08.828 "trsvcid": "46666" 00:18:08.828 }, 00:18:08.828 "auth": { 00:18:08.828 "state": "completed", 00:18:08.828 "digest": "sha384", 00:18:08.828 "dhgroup": "null" 00:18:08.828 } 00:18:08.828 } 00:18:08.828 ]' 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.828 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.087 18:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:10.023 18:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.023 18:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.282 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.540 00:18:10.540 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.540 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.540 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.799 { 00:18:10.799 "cntlid": 51, 00:18:10.799 "qid": 0, 00:18:10.799 "state": "enabled", 00:18:10.799 "thread": "nvmf_tgt_poll_group_000", 00:18:10.799 "listen_address": { 00:18:10.799 "trtype": "TCP", 00:18:10.799 "adrfam": "IPv4", 00:18:10.799 "traddr": "10.0.0.2", 00:18:10.799 "trsvcid": "4420" 00:18:10.799 }, 00:18:10.799 "peer_address": { 00:18:10.799 "trtype": "TCP", 00:18:10.799 "adrfam": "IPv4", 00:18:10.799 "traddr": "10.0.0.1", 00:18:10.799 "trsvcid": "55904" 00:18:10.799 }, 00:18:10.799 "auth": { 00:18:10.799 "state": "completed", 00:18:10.799 "digest": "sha384", 00:18:10.799 "dhgroup": "null" 00:18:10.799 } 00:18:10.799 } 00:18:10.799 ]' 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.799 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.058 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.992 18:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.251 00:18:12.509 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.509 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.509 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.767 { 00:18:12.767 "cntlid": 53, 00:18:12.767 "qid": 0, 00:18:12.767 "state": "enabled", 00:18:12.767 "thread": "nvmf_tgt_poll_group_000", 00:18:12.767 "listen_address": { 00:18:12.767 "trtype": "TCP", 00:18:12.767 "adrfam": "IPv4", 00:18:12.767 "traddr": "10.0.0.2", 00:18:12.767 "trsvcid": "4420" 00:18:12.767 }, 00:18:12.767 "peer_address": { 00:18:12.767 "trtype": "TCP", 00:18:12.767 "adrfam": "IPv4", 00:18:12.767 "traddr": "10.0.0.1", 00:18:12.767 "trsvcid": "55942" 00:18:12.767 }, 00:18:12.767 "auth": { 00:18:12.767 "state": "completed", 00:18:12.767 "digest": "sha384", 00:18:12.767 "dhgroup": "null" 00:18:12.767 } 00:18:12.767 } 00:18:12.767 ]' 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.767 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.026 18:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.959 18:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.218 00:18:14.218 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.218 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.218 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.476 { 00:18:14.476 "cntlid": 55, 00:18:14.476 "qid": 0, 00:18:14.476 "state": "enabled", 00:18:14.476 "thread": "nvmf_tgt_poll_group_000", 00:18:14.476 "listen_address": { 00:18:14.476 "trtype": "TCP", 00:18:14.476 "adrfam": "IPv4", 00:18:14.476 "traddr": "10.0.0.2", 00:18:14.476 "trsvcid": "4420" 00:18:14.476 }, 00:18:14.476 "peer_address": { 
00:18:14.476 "trtype": "TCP", 00:18:14.476 "adrfam": "IPv4", 00:18:14.476 "traddr": "10.0.0.1", 00:18:14.476 "trsvcid": "55966" 00:18:14.476 }, 00:18:14.476 "auth": { 00:18:14.476 "state": "completed", 00:18:14.476 "digest": "sha384", 00:18:14.476 "dhgroup": "null" 00:18:14.476 } 00:18:14.476 } 00:18:14.476 ]' 00:18:14.476 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.735 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.993 18:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.929 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.187 00:18:16.187 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.187 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.187 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.446 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.446 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.446 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.446 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.446 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.446 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.446 { 00:18:16.446 "cntlid": 57, 00:18:16.446 "qid": 0, 00:18:16.447 "state": "enabled", 00:18:16.447 "thread": "nvmf_tgt_poll_group_000", 00:18:16.447 "listen_address": { 00:18:16.447 "trtype": "TCP", 00:18:16.447 "adrfam": "IPv4", 00:18:16.447 "traddr": "10.0.0.2", 00:18:16.447 "trsvcid": "4420" 00:18:16.447 }, 00:18:16.447 "peer_address": { 00:18:16.447 "trtype": "TCP", 00:18:16.447 "adrfam": "IPv4", 00:18:16.447 "traddr": "10.0.0.1", 00:18:16.447 "trsvcid": "55998" 00:18:16.447 }, 00:18:16.447 "auth": { 00:18:16.447 "state": "completed", 00:18:16.447 "digest": "sha384", 00:18:16.447 "dhgroup": "ffdhe2048" 00:18:16.447 } 00:18:16.447 } 00:18:16.447 ]' 
00:18:16.447 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.705 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.964 18:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.907 18:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.209 00:18:18.209 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.209 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.209 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.474 { 00:18:18.474 "cntlid": 59, 00:18:18.474 "qid": 0, 00:18:18.474 "state": "enabled", 00:18:18.474 "thread": "nvmf_tgt_poll_group_000", 00:18:18.474 "listen_address": { 00:18:18.474 "trtype": "TCP", 00:18:18.474 "adrfam": "IPv4", 00:18:18.474 "traddr": "10.0.0.2", 00:18:18.474 "trsvcid": "4420" 00:18:18.474 }, 00:18:18.474 "peer_address": { 00:18:18.474 "trtype": "TCP", 00:18:18.474 "adrfam": "IPv4", 00:18:18.474 "traddr": "10.0.0.1", 00:18:18.474 "trsvcid": "56030" 00:18:18.474 }, 00:18:18.474 "auth": { 00:18:18.474 "state": "completed", 00:18:18.474 "digest": "sha384", 00:18:18.474 "dhgroup": "ffdhe2048" 00:18:18.474 } 00:18:18.474 } 00:18:18.474 ]' 00:18:18.474 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.736 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.994 18:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.929 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.188 
18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.188 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.447 00:18:20.447 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.447 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.447 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.706 { 00:18:20.706 "cntlid": 61, 00:18:20.706 "qid": 0, 00:18:20.706 "state": "enabled", 00:18:20.706 "thread": "nvmf_tgt_poll_group_000", 00:18:20.706 "listen_address": { 00:18:20.706 "trtype": "TCP", 00:18:20.706 "adrfam": "IPv4", 00:18:20.706 "traddr": "10.0.0.2", 00:18:20.706 "trsvcid": "4420" 00:18:20.706 }, 00:18:20.706 "peer_address": { 00:18:20.706 "trtype": "TCP", 00:18:20.706 "adrfam": "IPv4", 00:18:20.706 "traddr": "10.0.0.1", 00:18:20.706 "trsvcid": "52178" 00:18:20.706 }, 00:18:20.706 "auth": { 00:18:20.706 "state": "completed", 00:18:20.706 "digest": "sha384", 00:18:20.706 "dhgroup": "ffdhe2048" 00:18:20.706 } 00:18:20.706 } 00:18:20.706 ]' 00:18:20.706 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.965 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.965 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.965 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.965 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.965 18:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.965 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.965 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.223 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.159 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.159 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.418 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.418 
18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.418 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.684 00:18:22.684 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.684 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.684 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.946 { 00:18:22.946 "cntlid": 63, 00:18:22.946 "qid": 0, 00:18:22.946 "state": "enabled", 00:18:22.946 "thread": "nvmf_tgt_poll_group_000", 00:18:22.946 "listen_address": { 00:18:22.946 "trtype": "TCP", 00:18:22.946 "adrfam": "IPv4", 00:18:22.946 "traddr": "10.0.0.2", 00:18:22.946 "trsvcid": "4420" 00:18:22.946 }, 00:18:22.946 "peer_address": { 00:18:22.946 "trtype": "TCP", 00:18:22.946 "adrfam": "IPv4", 00:18:22.946 "traddr": "10.0.0.1", 00:18:22.946 "trsvcid": "52202" 00:18:22.946 }, 00:18:22.946 "auth": { 00:18:22.946 "state": "completed", 00:18:22.946 "digest": "sha384", 00:18:22.946 "dhgroup": "ffdhe2048" 00:18:22.946 } 00:18:22.946 } 00:18:22.946 ]' 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.946 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.947 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.947 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.947 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:23.205 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.139 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.398 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.398 18:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.657 00:18:24.657 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.657 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.657 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.915 { 00:18:24.915 "cntlid": 65, 00:18:24.915 "qid": 0, 00:18:24.915 "state": "enabled", 00:18:24.915 "thread": "nvmf_tgt_poll_group_000", 00:18:24.915 "listen_address": { 00:18:24.915 "trtype": "TCP", 00:18:24.915 "adrfam": "IPv4", 00:18:24.915 "traddr": "10.0.0.2", 00:18:24.915 "trsvcid": "4420" 00:18:24.915 }, 00:18:24.915 "peer_address": { 00:18:24.915 "trtype": "TCP", 00:18:24.915 "adrfam": "IPv4", 00:18:24.915 "traddr": "10.0.0.1", 00:18:24.915 "trsvcid": "52234" 00:18:24.915 }, 00:18:24.915 "auth": { 00:18:24.915 "state": "completed", 00:18:24.915 "digest": "sha384", 00:18:24.915 "dhgroup": "ffdhe3072" 00:18:24.915 } 00:18:24.915 } 00:18:24.915 ]' 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.915 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.173 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.173 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.173 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.432 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.999 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.257 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.516 00:18:26.516 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.516 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.516 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.775 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.775 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.775 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.775 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.775 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.775 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.776 { 00:18:26.776 "cntlid": 67, 00:18:26.776 "qid": 0, 00:18:26.776 "state": "enabled", 00:18:26.776 "thread": "nvmf_tgt_poll_group_000", 00:18:26.776 "listen_address": { 00:18:26.776 "trtype": "TCP", 00:18:26.776 "adrfam": "IPv4", 00:18:26.776 "traddr": "10.0.0.2", 00:18:26.776 "trsvcid": "4420" 00:18:26.776 }, 00:18:26.776 "peer_address": { 00:18:26.776 "trtype": "TCP", 00:18:26.776 "adrfam": "IPv4", 00:18:26.776 "traddr": "10.0.0.1", 00:18:26.776 "trsvcid": "52266" 00:18:26.776 }, 00:18:26.776 "auth": { 00:18:26.776 "state": "completed", 00:18:26.776 "digest": "sha384", 00:18:26.776 "dhgroup": "ffdhe3072" 00:18:26.776 } 00:18:26.776 } 00:18:26.776 ]' 00:18:26.776 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.776 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.776 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.776 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.776 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.034 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.034 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.034 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.293 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:27.860 18:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.125 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.384 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.642 00:18:28.642 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.642 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
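Each cycle's verification step (target/auth.sh@44-48 above) asserts three fields of the qpair JSON returned by the target. The checks amount to:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]  # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # both sides authenticated

A "completed" state with the expected digest and dhgroup is what distinguishes a real DH-HMAC-CHAP exchange from an unauthenticated connect.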
target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.642 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.901 { 00:18:28.901 "cntlid": 69, 00:18:28.901 "qid": 0, 00:18:28.901 "state": "enabled", 00:18:28.901 "thread": "nvmf_tgt_poll_group_000", 00:18:28.901 "listen_address": { 00:18:28.901 "trtype": "TCP", 00:18:28.901 "adrfam": "IPv4", 00:18:28.901 "traddr": "10.0.0.2", 00:18:28.901 "trsvcid": "4420" 00:18:28.901 }, 00:18:28.901 "peer_address": { 00:18:28.901 "trtype": "TCP", 00:18:28.901 "adrfam": "IPv4", 00:18:28.901 "traddr": "10.0.0.1", 00:18:28.901 "trsvcid": "52294" 00:18:28.901 }, 00:18:28.901 "auth": { 00:18:28.901 "state": "completed", 00:18:28.901 "digest": "sha384", 00:18:28.901 "dhgroup": "ffdhe3072" 00:18:28.901 } 00:18:28.901 } 00:18:28.901 ]' 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.901 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.160 18:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.537 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.795 00:18:30.795 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.795 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.795 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.053 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.054 18:55:15 
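The ckey assignment at target/auth.sh@37 uses bash's ${var:+word} expansion, so --dhchap-ctrlr-key is passed only when a controller key exists for that index. That is why the key3 iteration above calls nvmf_subsystem_add_host with --dhchap-key key3 alone: ckeys[3] is empty in this run, so key3 exercises unidirectional authentication (the controller challenges the host but is not challenged back). The expansion behaves like:

  unset ck;  echo ${ck:+--dhchap-ctrlr-key ckey3}    # empty: flag omitted
  ck=set;    echo ${ck:+--dhchap-ctrlr-key ckey3}    # prints the flag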
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.054 { 00:18:31.054 "cntlid": 71, 00:18:31.054 "qid": 0, 00:18:31.054 "state": "enabled", 00:18:31.054 "thread": "nvmf_tgt_poll_group_000", 00:18:31.054 "listen_address": { 00:18:31.054 "trtype": "TCP", 00:18:31.054 "adrfam": "IPv4", 00:18:31.054 "traddr": "10.0.0.2", 00:18:31.054 "trsvcid": "4420" 00:18:31.054 }, 00:18:31.054 "peer_address": { 00:18:31.054 "trtype": "TCP", 00:18:31.054 "adrfam": "IPv4", 00:18:31.054 "traddr": "10.0.0.1", 00:18:31.054 "trsvcid": "37808" 00:18:31.054 }, 00:18:31.054 "auth": { 00:18:31.054 "state": "completed", 00:18:31.054 "digest": "sha384", 00:18:31.054 "dhgroup": "ffdhe3072" 00:18:31.054 } 00:18:31.054 } 00:18:31.054 ]' 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.054 18:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.054 18:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.054 18:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.054 18:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.054 18:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.054 18:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.312 18:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.248 18:55:17 
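From here the trace moves from ffdhe3072 to ffdhe4096. The xtrace markers at target/auth.sh@92-96 reveal the surrounding control flow: an outer loop over DH groups and an inner loop over key indices, with the host reconfigured before every iteration. Reconstructed from those markers (a sketch, not the verbatim script):

  for dhgroup in "${dhgroups[@]}"; do        # auth.sh@92
      for keyid in "${!keys[@]}"; do         # auth.sh@93
          # restrict the host to one digest/dhgroup so the negotiation is deterministic
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"   # auth.sh@94
          connect_authenticate sha384 "$dhgroup" "$keyid"                                      # auth.sh@96
      done
  done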
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.248 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.508 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.766 00:18:32.766 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.766 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.766 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.067 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.067 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.067 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.067 18:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.067 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.067 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.067 { 00:18:33.067 "cntlid": 73, 00:18:33.067 "qid": 0, 00:18:33.067 "state": "enabled", 00:18:33.067 "thread": "nvmf_tgt_poll_group_000", 00:18:33.067 "listen_address": { 00:18:33.067 "trtype": "TCP", 00:18:33.067 "adrfam": "IPv4", 00:18:33.067 "traddr": "10.0.0.2", 00:18:33.067 "trsvcid": "4420" 00:18:33.067 }, 00:18:33.067 "peer_address": { 00:18:33.067 "trtype": "TCP", 00:18:33.067 "adrfam": "IPv4", 00:18:33.067 "traddr": "10.0.0.1", 00:18:33.067 "trsvcid": "37832" 00:18:33.067 }, 00:18:33.067 "auth": { 00:18:33.067 "state": "completed", 00:18:33.067 "digest": "sha384", 00:18:33.067 "dhgroup": "ffdhe4096" 00:18:33.067 } 00:18:33.067 } 00:18:33.067 ]' 00:18:33.067 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.067 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.067 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.326 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.326 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.326 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.326 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.326 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.585 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
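Besides the SPDK host app, every iteration also exercises the kernel initiator through nvme-cli (target/auth.sh@52/@55 above): connect with the DHHC-1 secrets, confirm a controller comes up, then disconnect. The shape of that call with values from this run; -i 1 requests a single I/O queue, and $host_nqn, $host_id, $host_secret, $ctrl_secret stand in for the logged values:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$host_nqn" --hostid "$host_id" \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expects: disconnected 1 controller(s)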
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.523 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.091 00:18:35.091 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.091 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.091 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:35.350 { 00:18:35.350 "cntlid": 75, 00:18:35.350 "qid": 0, 00:18:35.350 "state": "enabled", 00:18:35.350 "thread": "nvmf_tgt_poll_group_000", 00:18:35.350 "listen_address": { 00:18:35.350 "trtype": "TCP", 00:18:35.350 "adrfam": "IPv4", 00:18:35.350 "traddr": "10.0.0.2", 00:18:35.350 "trsvcid": "4420" 00:18:35.350 }, 00:18:35.350 "peer_address": { 00:18:35.350 "trtype": "TCP", 00:18:35.350 "adrfam": "IPv4", 00:18:35.350 "traddr": "10.0.0.1", 00:18:35.350 "trsvcid": "37860" 00:18:35.350 }, 00:18:35.350 "auth": { 00:18:35.350 "state": "completed", 00:18:35.350 "digest": "sha384", 00:18:35.350 "dhgroup": "ffdhe4096" 00:18:35.350 } 00:18:35.350 } 00:18:35.350 ]' 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.350 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.608 18:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:36.540 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.799 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.058 
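Two RPC channels are in play throughout: rpc_cmd talks to the target app on its default socket, while hostrpc (target/auth.sh@31, expanded on every call in this trace) routes to a second SPDK app acting as the NVMe-oF host on /var/tmp/host.sock. A plausible definition consistent with the logged expansion, where $rootdir is assumed to be the SPDK checkout:

  hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }

Running target and host as separate processes on one machine is what lets the test drive both ends of the authentication handshake from a single script.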
18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.058 18:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.316 00:18:37.316 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.316 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.316 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.575 { 00:18:37.575 "cntlid": 77, 00:18:37.575 "qid": 0, 00:18:37.575 "state": "enabled", 00:18:37.575 "thread": "nvmf_tgt_poll_group_000", 00:18:37.575 "listen_address": { 00:18:37.575 "trtype": "TCP", 00:18:37.575 "adrfam": "IPv4", 00:18:37.575 "traddr": "10.0.0.2", 00:18:37.575 "trsvcid": "4420" 00:18:37.575 }, 00:18:37.575 "peer_address": { 
00:18:37.575 "trtype": "TCP", 00:18:37.575 "adrfam": "IPv4", 00:18:37.575 "traddr": "10.0.0.1", 00:18:37.575 "trsvcid": "37882" 00:18:37.575 }, 00:18:37.575 "auth": { 00:18:37.575 "state": "completed", 00:18:37.575 "digest": "sha384", 00:18:37.575 "dhgroup": "ffdhe4096" 00:18:37.575 } 00:18:37.575 } 00:18:37.575 ]' 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.575 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.833 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.833 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.833 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.091 18:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.658 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
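The recurring xtrace_disable / set +x / [[ 0 == 0 ]] triplet (common/autotest_common.sh@559/@10/@587) is the rpc_cmd wrapper checking its own exit status with tracing suppressed; [[ 0 == 0 ]] is the assertion after a successful RPC. A rough sketch of that pattern; the real helper lives in autotest_common.sh and likely differs in detail:

  rpc_cmd() {
      xtrace_disable                      # quiet the trace while the RPC runs
      "$rootdir/scripts/rpc.py" "$@"
      local status=$?
      xtrace_restore
      [[ $status == 0 ]]                  # shows up in the log as [[ 0 == 0 ]]
  }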
00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.916 18:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.483 00:18:39.483 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.483 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.483 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.741 { 00:18:39.741 "cntlid": 79, 00:18:39.741 "qid": 0, 00:18:39.741 "state": "enabled", 00:18:39.741 "thread": "nvmf_tgt_poll_group_000", 00:18:39.741 "listen_address": { 00:18:39.741 "trtype": "TCP", 00:18:39.741 "adrfam": "IPv4", 00:18:39.741 "traddr": "10.0.0.2", 00:18:39.741 "trsvcid": "4420" 00:18:39.741 }, 00:18:39.741 "peer_address": { 00:18:39.741 "trtype": "TCP", 00:18:39.741 "adrfam": "IPv4", 00:18:39.741 "traddr": "10.0.0.1", 00:18:39.741 "trsvcid": "59248" 00:18:39.741 }, 00:18:39.741 "auth": { 00:18:39.741 "state": "completed", 00:18:39.741 "digest": "sha384", 00:18:39.741 "dhgroup": "ffdhe4096" 00:18:39.741 } 00:18:39.741 } 00:18:39.741 ]' 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.741 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.999 18:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
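Each iteration tears down in a fixed order before the next key is installed: detach the SPDK-host controller, run the kernel connect/disconnect pass, then revoke the host's authorization so a stale key can never satisfy a later exchange ($host_nqn stands in for the logged host NQN):

  hostrpc bdev_nvme_detach_controller nvme0                                    # auth.sh@49
  # kernel-initiator connect/disconnect pass (auth.sh@52/@55), as sketched earlier
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$host_nqn"    # auth.sh@56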
00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.375 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.942 00:18:41.942 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.942 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.942 18:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.200 { 00:18:42.200 "cntlid": 81, 00:18:42.200 "qid": 0, 00:18:42.200 "state": "enabled", 00:18:42.200 "thread": "nvmf_tgt_poll_group_000", 00:18:42.200 "listen_address": { 00:18:42.200 "trtype": "TCP", 00:18:42.200 "adrfam": "IPv4", 00:18:42.200 "traddr": "10.0.0.2", 00:18:42.200 "trsvcid": "4420" 00:18:42.200 }, 00:18:42.200 "peer_address": { 00:18:42.200 "trtype": "TCP", 00:18:42.200 "adrfam": "IPv4", 00:18:42.200 "traddr": "10.0.0.1", 00:18:42.200 "trsvcid": "59264" 00:18:42.200 }, 00:18:42.200 "auth": { 00:18:42.200 "state": "completed", 00:18:42.200 "digest": "sha384", 00:18:42.200 "dhgroup": "ffdhe6144" 00:18:42.200 } 00:18:42.200 } 00:18:42.200 ]' 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.200 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.458 18:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.458 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.458 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.458 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.458 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.717 18:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.652 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.911 18:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.911 18:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.479 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.479 { 00:18:44.479 "cntlid": 83, 00:18:44.479 "qid": 0, 00:18:44.479 "state": "enabled", 00:18:44.479 "thread": "nvmf_tgt_poll_group_000", 00:18:44.479 "listen_address": { 00:18:44.479 "trtype": "TCP", 00:18:44.479 "adrfam": "IPv4", 00:18:44.479 "traddr": "10.0.0.2", 00:18:44.479 "trsvcid": "4420" 00:18:44.479 }, 00:18:44.479 "peer_address": { 00:18:44.479 "trtype": "TCP", 00:18:44.479 "adrfam": "IPv4", 00:18:44.479 "traddr": "10.0.0.1", 00:18:44.479 "trsvcid": "59302" 00:18:44.479 }, 00:18:44.479 "auth": { 00:18:44.479 "state": "completed", 00:18:44.479 "digest": "sha384", 00:18:44.479 "dhgroup": "ffdhe6144" 00:18:44.479 } 00:18:44.479 } 00:18:44.479 ]' 00:18:44.479 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.737 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.996 18:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.933 18:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.933 18:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.500 00:18:46.500 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.500 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.500 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.759 { 00:18:46.759 "cntlid": 85, 00:18:46.759 "qid": 0, 00:18:46.759 "state": "enabled", 00:18:46.759 "thread": "nvmf_tgt_poll_group_000", 00:18:46.759 "listen_address": { 00:18:46.759 "trtype": "TCP", 00:18:46.759 "adrfam": "IPv4", 00:18:46.759 "traddr": "10.0.0.2", 00:18:46.759 "trsvcid": "4420" 00:18:46.759 }, 00:18:46.759 "peer_address": { 00:18:46.759 "trtype": "TCP", 00:18:46.759 "adrfam": "IPv4", 00:18:46.759 "traddr": "10.0.0.1", 00:18:46.759 "trsvcid": "59322" 00:18:46.759 }, 00:18:46.759 "auth": { 00:18:46.759 "state": "completed", 00:18:46.759 "digest": "sha384", 00:18:46.759 "dhgroup": "ffdhe6144" 00:18:46.759 } 00:18:46.759 } 00:18:46.759 ]' 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.759 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.017 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.017 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.017 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.017 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.017 18:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.275 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.250 18:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.250 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.250 18:55:33 
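The target/auth.sh@37 expansion seen in each cycle is doing real work here: ${ckeys[$3]:+...} emits the flag pair only when a controller key exists for that key index, and ckeys[3] is evidently empty, which is why the key3 iteration above calls nvmf_subsystem_add_host with --dhchap-key key3 alone. key3 therefore exercises unidirectional authentication: the host proves itself, the controller is never challenged. Unpacked (variable names other than ckeys are stand-ins):

ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty array when ckeys[$3] is unset or empty
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"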
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.832 00:18:48.832 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.832 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.832 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.090 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.090 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.090 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.090 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.090 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.090 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.090 { 00:18:49.090 "cntlid": 87, 00:18:49.090 "qid": 0, 00:18:49.090 "state": "enabled", 00:18:49.090 "thread": "nvmf_tgt_poll_group_000", 00:18:49.090 "listen_address": { 00:18:49.090 "trtype": "TCP", 00:18:49.090 "adrfam": "IPv4", 00:18:49.090 "traddr": "10.0.0.2", 00:18:49.090 "trsvcid": "4420" 00:18:49.090 }, 00:18:49.091 "peer_address": { 00:18:49.091 "trtype": "TCP", 00:18:49.091 "adrfam": "IPv4", 00:18:49.091 "traddr": "10.0.0.1", 00:18:49.091 "trsvcid": "59342" 00:18:49.091 }, 00:18:49.091 "auth": { 00:18:49.091 "state": "completed", 00:18:49.091 "digest": "sha384", 00:18:49.091 "dhgroup": "ffdhe6144" 00:18:49.091 } 00:18:49.091 } 00:18:49.091 ]' 00:18:49.091 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.091 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.091 18:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.091 18:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.091 18:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.091 18:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.091 18:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.091 18:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.348 18:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.285 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.544 18:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.110 00:18:51.368 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.368 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.368 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.627 { 00:18:51.627 "cntlid": 89, 00:18:51.627 "qid": 0, 00:18:51.627 "state": "enabled", 00:18:51.627 "thread": "nvmf_tgt_poll_group_000", 00:18:51.627 "listen_address": { 00:18:51.627 "trtype": "TCP", 00:18:51.627 "adrfam": "IPv4", 00:18:51.627 "traddr": "10.0.0.2", 00:18:51.627 "trsvcid": "4420" 00:18:51.627 }, 00:18:51.627 "peer_address": { 00:18:51.627 "trtype": "TCP", 00:18:51.627 "adrfam": "IPv4", 00:18:51.627 "traddr": "10.0.0.1", 00:18:51.627 "trsvcid": "53830" 00:18:51.627 }, 00:18:51.627 "auth": { 00:18:51.627 "state": "completed", 00:18:51.627 "digest": "sha384", 00:18:51.627 "dhgroup": "ffdhe8192" 00:18:51.627 } 00:18:51.627 } 00:18:51.627 ]' 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.627 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.886 18:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:18:52.821 18:55:37 
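The pair of secrets just handed to nvme connect follows the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64>:. Our reading, an inference rather than anything this log states: <hh> names the hash the secret was transformed with (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the payload is base64 of the raw secret followed by a CRC-32, which matches the lengths seen here:

key='DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==:'   # host secret from the connect above
cut -d: -f2 <<< "$key"                       # -> 00
cut -d: -f3 <<< "$key" | base64 -d | wc -c   # -> 52, i.e. a 48-byte secret plus 4 CRC bytes; 00 marks the transform, not the length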
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.821 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.080 18:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.647 00:18:53.647 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.647 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
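By this point the whole shape of connect_authenticate can be read back off the file@line markers: @34 declares the locals, @36 unpacks the arguments, @37 builds the optional ckey pair, @39 registers the host on the target, @40 attaches from the SPDK host app, @44 through @48 verify the negotiated parameters, @49 detaches, @52 repeats the handshake from the kernel initiator, and @55/@56 tear down. A reconstruction, not the verbatim script; $subnqn, $hostnqn and $hostid stand in for the literal values in the log:

connect_authenticate() {
    local digest dhgroup key ckey qpairs
    digest="$1" dhgroup="$2" key="key$3"
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

    # target side: bind the host NQN to this key (plus controller key, if any)
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"

    # host side: attaching forces an authenticated CONNECT
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" "${ckey[@]}"

    # check what the target actually negotiated on the admin qpair
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # same handshake again from the kernel initiator (note nvme-cli spells the
    # controller-key flag --dhchap-ctrl-secret, while the RPCs use --dhchap-ctrlr-key)
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "${keys[$3]}" ${ckeys[$3]:+--dhchap-ctrl-secret "${ckeys[$3]}"}
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}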
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.647 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.905 { 00:18:53.905 "cntlid": 91, 00:18:53.905 "qid": 0, 00:18:53.905 "state": "enabled", 00:18:53.905 "thread": "nvmf_tgt_poll_group_000", 00:18:53.905 "listen_address": { 00:18:53.905 "trtype": "TCP", 00:18:53.905 "adrfam": "IPv4", 00:18:53.905 "traddr": "10.0.0.2", 00:18:53.905 "trsvcid": "4420" 00:18:53.905 }, 00:18:53.905 "peer_address": { 00:18:53.905 "trtype": "TCP", 00:18:53.905 "adrfam": "IPv4", 00:18:53.905 "traddr": "10.0.0.1", 00:18:53.905 "trsvcid": "53846" 00:18:53.905 }, 00:18:53.905 "auth": { 00:18:53.905 "state": "completed", 00:18:53.905 "digest": "sha384", 00:18:53.905 "dhgroup": "ffdhe8192" 00:18:53.905 } 00:18:53.905 } 00:18:53.905 ]' 00:18:53.905 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.164 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.164 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.164 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.164 18:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.164 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.164 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.164 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.423 18:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.359 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.618 18:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.555 00:18:56.555 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.555 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.555 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.814 { 00:18:56.814 "cntlid": 93, 00:18:56.814 "qid": 0, 00:18:56.814 "state": "enabled", 00:18:56.814 "thread": "nvmf_tgt_poll_group_000", 00:18:56.814 "listen_address": { 00:18:56.814 "trtype": "TCP", 00:18:56.814 "adrfam": "IPv4", 00:18:56.814 "traddr": "10.0.0.2", 00:18:56.814 "trsvcid": "4420" 00:18:56.814 }, 00:18:56.814 "peer_address": { 00:18:56.814 "trtype": "TCP", 00:18:56.814 "adrfam": "IPv4", 00:18:56.814 "traddr": "10.0.0.1", 00:18:56.814 "trsvcid": "53862" 00:18:56.814 }, 00:18:56.814 "auth": { 00:18:56.814 "state": "completed", 00:18:56.814 "digest": "sha384", 00:18:56.814 "dhgroup": "ffdhe8192" 00:18:56.814 } 00:18:56.814 } 00:18:56.814 ]' 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.814 18:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.073 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.009 18:55:42 
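Note the rhythm: every combination begins with bdev_nvme_set_options pinning the host to a single --dhchap-digests value and a single --dhchap-dhgroups value. With nothing else on offer, a qpair that later reports auth.state == "completed" proves the negotiation landed on exactly that pairing instead of silently falling back to another one:

hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192   # one digest, one group: success implies this exact combination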
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.009 18:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.266 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.267 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.267 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.267 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.835 00:18:58.835 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.835 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.093 18:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.351 { 00:18:59.351 "cntlid": 95, 00:18:59.351 "qid": 0, 00:18:59.351 "state": "enabled", 00:18:59.351 "thread": "nvmf_tgt_poll_group_000", 00:18:59.351 "listen_address": { 00:18:59.351 "trtype": "TCP", 00:18:59.351 "adrfam": "IPv4", 00:18:59.351 "traddr": "10.0.0.2", 00:18:59.351 "trsvcid": "4420" 00:18:59.351 }, 00:18:59.351 "peer_address": { 00:18:59.351 "trtype": "TCP", 00:18:59.351 "adrfam": "IPv4", 00:18:59.351 "traddr": "10.0.0.1", 00:18:59.351 "trsvcid": "53902" 00:18:59.351 }, 00:18:59.351 "auth": { 00:18:59.351 "state": "completed", 00:18:59.351 "digest": "sha384", 00:18:59.351 "dhgroup": "ffdhe8192" 00:18:59.351 } 00:18:59.351 } 00:18:59.351 ]' 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.351 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.609 18:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:19:00.545 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.545 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:00.545 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.546 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.546 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.546 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:00.546 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.546 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.546 18:55:45 
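The @91/@92/@93 markers just traced give away the loop nesting, and the trace is crossing an outer boundary here: the sha384 pass is complete and the dhgroup loop restarts under sha512 at "null", i.e. DH-HMAC-CHAP with no ephemeral Diffie-Hellman exchange at all, challenge and response derived from the shared secret alone. The reconstructed shape below follows those markers; the exact contents of digests and dhgroups beyond what this excerpt shows are assumptions:

digests=(sha256 sha384 sha512)                                    # sha256 presumably ran earlier in the log
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done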
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.546 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.804 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.062 00:19:01.062 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.062 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.062 18:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.320 18:55:46 
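Up next is another [[ nvme0 == \n\v\m\e\0 ]] check. The backslashes are an xtrace artifact, not the script's spelling: the right-hand side of == inside [[ ]] is a glob pattern, and when the script supplies a quoted literal, bash re-prints it with every character escaped to show that it can only match literally. The source most likely just reads:

[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # quoted RHS = literal match, traced as \n\v\m\e\0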
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.320 { 00:19:01.320 "cntlid": 97, 00:19:01.320 "qid": 0, 00:19:01.320 "state": "enabled", 00:19:01.320 "thread": "nvmf_tgt_poll_group_000", 00:19:01.320 "listen_address": { 00:19:01.320 "trtype": "TCP", 00:19:01.320 "adrfam": "IPv4", 00:19:01.320 "traddr": "10.0.0.2", 00:19:01.320 "trsvcid": "4420" 00:19:01.320 }, 00:19:01.320 "peer_address": { 00:19:01.320 "trtype": "TCP", 00:19:01.320 "adrfam": "IPv4", 00:19:01.320 "traddr": "10.0.0.1", 00:19:01.320 "trsvcid": "48240" 00:19:01.320 }, 00:19:01.320 "auth": { 00:19:01.320 "state": "completed", 00:19:01.320 "digest": "sha512", 00:19:01.320 "dhgroup": "null" 00:19:01.320 } 00:19:01.320 } 00:19:01.320 ]' 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.320 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.579 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.579 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.579 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.837 18:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.403 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.662 18:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.311 00:19:03.311 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.311 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.311 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.570 { 00:19:03.570 "cntlid": 99, 00:19:03.570 "qid": 0, 00:19:03.570 "state": "enabled", 00:19:03.570 "thread": "nvmf_tgt_poll_group_000", 00:19:03.570 "listen_address": { 00:19:03.570 "trtype": "TCP", 00:19:03.570 "adrfam": "IPv4", 00:19:03.570 
"traddr": "10.0.0.2", 00:19:03.570 "trsvcid": "4420" 00:19:03.570 }, 00:19:03.570 "peer_address": { 00:19:03.570 "trtype": "TCP", 00:19:03.570 "adrfam": "IPv4", 00:19:03.570 "traddr": "10.0.0.1", 00:19:03.570 "trsvcid": "48256" 00:19:03.570 }, 00:19:03.570 "auth": { 00:19:03.570 "state": "completed", 00:19:03.570 "digest": "sha512", 00:19:03.570 "dhgroup": "null" 00:19:03.570 } 00:19:03.570 } 00:19:03.570 ]' 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.570 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.828 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.828 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.828 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.087 18:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.654 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.913 18:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:04.913 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:05.479
00:19:05.479 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:05.479 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:05.479 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:05.737 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.737 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:05.737 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:05.737 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.738 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:05.738 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:05.738 {
00:19:05.738 "cntlid": 101,
00:19:05.738 "qid": 0,
00:19:05.738 "state": "enabled",
00:19:05.738 "thread": "nvmf_tgt_poll_group_000",
00:19:05.738 "listen_address": {
00:19:05.738 "trtype": "TCP",
00:19:05.738 "adrfam": "IPv4",
00:19:05.738 "traddr": "10.0.0.2",
00:19:05.738 "trsvcid": "4420"
00:19:05.738 },
00:19:05.738 "peer_address": {
00:19:05.738 "trtype": "TCP",
00:19:05.738 "adrfam": "IPv4",
00:19:05.738 "traddr": "10.0.0.1",
00:19:05.738 "trsvcid": "48286"
00:19:05.738 },
00:19:05.738 "auth": {
00:19:05.738 "state": "completed",
00:19:05.738 "digest": "sha512",
00:19:05.738 "dhgroup": "null"
00:19:05.738 }
00:19:05.738 }
00:19:05.738 ]'
00:19:05.738 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:05.738 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:05.997 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:05.997 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:05.997 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:05.997 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:05.997 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:05.997 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:06.255 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX:
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:07.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:07.192 18:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:07.192 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:07.759
00:19:07.759 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:07.759 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:07.759 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:08.018 {
00:19:08.018 "cntlid": 103,
00:19:08.018 "qid": 0,
00:19:08.018 "state": "enabled",
00:19:08.018 "thread": "nvmf_tgt_poll_group_000",
00:19:08.018 "listen_address": {
00:19:08.018 "trtype": "TCP",
00:19:08.018 "adrfam": "IPv4",
00:19:08.018 "traddr": "10.0.0.2",
00:19:08.018 "trsvcid": "4420"
00:19:08.018 },
00:19:08.018 "peer_address": {
00:19:08.018 "trtype": "TCP",
00:19:08.018 "adrfam": "IPv4",
00:19:08.018 "traddr": "10.0.0.1",
00:19:08.018 "trsvcid": "48304"
00:19:08.018 },
00:19:08.018 "auth": {
00:19:08.018 "state": "completed",
00:19:08.018 "digest": "sha512",
00:19:08.018 "dhgroup": "null"
00:19:08.018 }
00:19:08.018 }
00:19:08.018 ]'
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:08.018 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:08.276 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:08.277 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:08.277 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:08.277 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:08.277 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:08.534 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=:
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:09.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:09.469 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:09.732 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:09.733 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.733 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:09.733 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:09.733 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:09.991
00:19:09.991 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:09.991 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:09.991 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.250 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.250 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.250 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:10.250 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.250 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:10.250 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:10.250 {
00:19:10.250 "cntlid": 105,
00:19:10.250 "qid": 0,
00:19:10.250 "state": "enabled",
00:19:10.250 "thread": "nvmf_tgt_poll_group_000",
00:19:10.250 "listen_address": {
00:19:10.250 "trtype": "TCP",
00:19:10.250 "adrfam": "IPv4",
00:19:10.250 "traddr": "10.0.0.2",
00:19:10.250 "trsvcid": "4420"
00:19:10.250 },
00:19:10.250 "peer_address": {
00:19:10.250 "trtype": "TCP",
00:19:10.250 "adrfam": "IPv4",
00:19:10.250 "traddr": "10.0.0.1",
00:19:10.250 "trsvcid": "55980"
00:19:10.250 },
00:19:10.250 "auth": {
00:19:10.250 "state": "completed",
00:19:10.250 "digest": "sha512",
00:19:10.250 "dhgroup": "ffdhe2048"
00:19:10.250 }
00:19:10.250 }
00:19:10.250 ]'
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:10.509 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:10.767 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=:
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:11.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:11.704 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:11.963 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:12.221
00:19:12.221 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:12.221 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:12.221 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:12.479 {
00:19:12.479 "cntlid": 107,
00:19:12.479 "qid": 0,
00:19:12.479 "state": "enabled",
00:19:12.479 "thread": "nvmf_tgt_poll_group_000",
00:19:12.479 "listen_address": {
00:19:12.479 "trtype": "TCP",
00:19:12.479 "adrfam": "IPv4",
00:19:12.479 "traddr": "10.0.0.2",
00:19:12.479 "trsvcid": "4420"
00:19:12.479 },
00:19:12.479 "peer_address": {
00:19:12.479 "trtype": "TCP",
00:19:12.479 "adrfam": "IPv4",
00:19:12.479 "traddr": "10.0.0.1",
00:19:12.479 "trsvcid": "56004"
00:19:12.479 },
00:19:12.479 "auth": {
00:19:12.479 "state": "completed",
00:19:12.479 "digest": "sha512",
00:19:12.479 "dhgroup": "ffdhe2048"
00:19:12.479 }
00:19:12.479 }
00:19:12.479 ]'
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:12.479 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:12.738 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==:
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:13.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:13.674 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:13.937 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:14.196
00:19:14.196 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:14.196 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:14.196 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:14.454 {
00:19:14.454 "cntlid": 109,
00:19:14.454 "qid": 0,
00:19:14.454 "state": "enabled",
00:19:14.454 "thread": "nvmf_tgt_poll_group_000",
00:19:14.454 "listen_address": {
00:19:14.454 "trtype": "TCP",
00:19:14.454 "adrfam": "IPv4",
00:19:14.454 "traddr": "10.0.0.2",
00:19:14.454 "trsvcid": "4420"
00:19:14.454 },
00:19:14.454 "peer_address": {
00:19:14.454 "trtype": "TCP",
00:19:14.454 "adrfam": "IPv4",
00:19:14.454 "traddr": "10.0.0.1",
00:19:14.454 "trsvcid": "56034"
00:19:14.454 },
00:19:14.454 "auth": {
00:19:14.454 "state": "completed",
00:19:14.454 "digest": "sha512",
00:19:14.454 "dhgroup": "ffdhe2048"
00:19:14.454 }
00:19:14.454 }
00:19:14.454 ]'
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:14.454 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:14.714 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:14.714 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:14.714 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:14.714 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:14.714 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.973 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX:
00:19:15.909 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:15.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:15.910 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:16.168
00:19:16.168 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:16.168 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:16.168 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:16.427 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:16.427 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:16.427 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:16.427 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.427 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:16.427 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:16.427 {
00:19:16.427 "cntlid": 111,
00:19:16.427 "qid": 0,
00:19:16.427 "state": "enabled",
00:19:16.427 "thread": "nvmf_tgt_poll_group_000",
00:19:16.427 "listen_address": {
00:19:16.427 "trtype": "TCP",
00:19:16.427 "adrfam": "IPv4",
00:19:16.427 "traddr": "10.0.0.2",
00:19:16.427 "trsvcid": "4420"
00:19:16.427 },
00:19:16.427 "peer_address": {
00:19:16.427 "trtype": "TCP",
00:19:16.427 "adrfam": "IPv4",
00:19:16.427 "traddr": "10.0.0.1",
00:19:16.427 "trsvcid": "56064"
00:19:16.427 },
00:19:16.427 "auth": {
00:19:16.427 "state": "completed",
00:19:16.427 "digest": "sha512",
00:19:16.427 "dhgroup": "ffdhe2048"
00:19:16.427 }
00:19:16.427 }
00:19:16.427 ]'
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:16.687 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:16.946 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=:
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:17.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:17.882 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:18.485
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:18.485 {
00:19:18.485 "cntlid": 113,
00:19:18.485 "qid": 0,
00:19:18.485 "state": "enabled",
00:19:18.485 "thread": "nvmf_tgt_poll_group_000",
00:19:18.485 "listen_address": {
00:19:18.485 "trtype": "TCP",
00:19:18.485 "adrfam": "IPv4",
00:19:18.485 "traddr": "10.0.0.2",
00:19:18.485 "trsvcid": "4420"
00:19:18.485 },
00:19:18.485 "peer_address": {
00:19:18.485 "trtype": "TCP",
00:19:18.485 "adrfam": "IPv4",
00:19:18.485 "traddr": "10.0.0.1",
00:19:18.485 "trsvcid": "56088"
00:19:18.485 },
00:19:18.485 "auth": {
00:19:18.485 "state": "completed",
00:19:18.485 "digest": "sha512",
00:19:18.485 "dhgroup": "ffdhe3072"
00:19:18.485 }
00:19:18.485 }
00:19:18.485 ]'
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:18.485 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:18.742 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:18.742 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:18.742 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:18.742 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:18.742 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:19.001 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=:
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:19.938 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:20.197
00:19:20.456 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:20.456 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:20.456 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:20.715 {
00:19:20.715 "cntlid": 115,
00:19:20.715 "qid": 0,
00:19:20.715 "state": "enabled",
00:19:20.715 "thread": "nvmf_tgt_poll_group_000",
00:19:20.715 "listen_address": {
00:19:20.715 "trtype": "TCP",
00:19:20.715 "adrfam": "IPv4",
00:19:20.715 "traddr": "10.0.0.2",
00:19:20.715 "trsvcid": "4420"
00:19:20.715 },
00:19:20.715 "peer_address": {
00:19:20.715 "trtype": "TCP",
00:19:20.715 "adrfam": "IPv4",
00:19:20.715 "traddr": "10.0.0.1",
00:19:20.715 "trsvcid": "46768"
00:19:20.715 },
00:19:20.715 "auth": {
00:19:20.715 "state": "completed",
00:19:20.715 "digest": "sha512",
00:19:20.715 "dhgroup": "ffdhe3072"
00:19:20.715 }
00:19:20.715 }
00:19:20.715 ]'
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.715 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.973 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==:
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:21.908 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:22.166 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:22.424
00:19:22.424 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:22.424 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.424 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:22.682 {
00:19:22.682 "cntlid": 117,
00:19:22.682 "qid": 0,
00:19:22.682 "state": "enabled",
00:19:22.682 "thread": "nvmf_tgt_poll_group_000",
00:19:22.682 "listen_address": {
00:19:22.682 "trtype": "TCP",
00:19:22.682 "adrfam": "IPv4",
00:19:22.682 "traddr": "10.0.0.2",
00:19:22.682 "trsvcid": "4420"
00:19:22.682 },
00:19:22.682 "peer_address": {
00:19:22.682 "trtype": "TCP",
00:19:22.682 "adrfam": "IPv4",
00:19:22.682 "traddr": "10.0.0.1",
00:19:22.682 "trsvcid": "46790"
00:19:22.682 },
00:19:22.682 "auth": {
00:19:22.682 "state": "completed",
00:19:22.682 "digest": "sha512",
00:19:22.682 "dhgroup": "ffdhe3072"
00:19:22.682 }
00:19:22.682 }
00:19:22.682 ]'
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:22.682 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:22.941 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.941 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.941 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:23.199 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX:
00:19:24.135 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:24.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:24.393 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:24.394 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:24.652
00:19:24.911 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:24.911 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:24.911 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:25.170 {
00:19:25.170 "cntlid": 119,
00:19:25.170 "qid": 0,
00:19:25.170 "state": "enabled",
00:19:25.170 "thread": "nvmf_tgt_poll_group_000",
00:19:25.170 "listen_address": {
00:19:25.170 "trtype": "TCP",
00:19:25.170 "adrfam": "IPv4",
00:19:25.170 "traddr": "10.0.0.2",
00:19:25.170 "trsvcid": "4420"
00:19:25.170 },
00:19:25.170 "peer_address": {
00:19:25.170 "trtype": "TCP",
00:19:25.170 "adrfam": "IPv4",
00:19:25.170 "traddr": "10.0.0.1",
00:19:25.170 "trsvcid": "46820"
00:19:25.170 },
00:19:25.170 "auth": {
00:19:25.170 "state": "completed",
00:19:25.170 "digest": "sha512",
00:19:25.170 "dhgroup": "ffdhe3072"
00:19:25.170 }
00:19:25.170 }
00:19:25.170 ]'
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:25.170 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:25.170 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:25.170 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:25.170 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:25.170 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:25.170 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:25.427 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=:
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:26.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:26.363 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:26.622 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:27.189
00:19:27.189 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:27.189 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:27.189 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:27.448 {
00:19:27.448 "cntlid": 121,
00:19:27.448 "qid": 0,
00:19:27.448 "state": "enabled",
00:19:27.448 "thread": "nvmf_tgt_poll_group_000",
00:19:27.448 "listen_address": {
00:19:27.448 "trtype": "TCP",
00:19:27.448 "adrfam": "IPv4",
00:19:27.448 "traddr": "10.0.0.2",
00:19:27.448 "trsvcid": "4420"
00:19:27.448 },
00:19:27.448 "peer_address": {
00:19:27.448 "trtype": "TCP",
00:19:27.448 "adrfam": "IPv4",
00:19:27.448 "traddr": "10.0.0.1",
00:19:27.448 "trsvcid": "46850"
00:19:27.448 },
00:19:27.448 "auth": {
00:19:27.448 "state": "completed",
00:19:27.448 "digest": "sha512",
00:19:27.448 "dhgroup": "ffdhe4096"
00:19:27.448 }
00:19:27.448 }
00:19:27.448 ]'
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:27.448 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:27.707 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=:
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:29.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:29.085 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.344 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.603 00:19:29.603 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.603 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.603 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.898 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.899 { 00:19:29.899 "cntlid": 123, 00:19:29.899 "qid": 0, 00:19:29.899 "state": "enabled", 00:19:29.899 "thread": "nvmf_tgt_poll_group_000", 00:19:29.899 "listen_address": { 00:19:29.899 "trtype": "TCP", 00:19:29.899 "adrfam": "IPv4", 00:19:29.899 "traddr": "10.0.0.2", 00:19:29.899 "trsvcid": "4420" 00:19:29.899 }, 00:19:29.899 "peer_address": { 00:19:29.899 "trtype": "TCP", 00:19:29.899 "adrfam": "IPv4", 00:19:29.899 "traddr": "10.0.0.1", 00:19:29.899 "trsvcid": "39796" 00:19:29.899 }, 00:19:29.899 "auth": { 00:19:29.899 "state": "completed", 00:19:29.899 "digest": "sha512", 00:19:29.899 "dhgroup": "ffdhe4096" 00:19:29.899 } 00:19:29.899 } 00:19:29.899 ]' 00:19:29.899 18:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.899 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.157 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.157 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.157 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.416 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:19:30.981 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.981 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:30.981 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.982 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.240 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.240 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.240 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.240 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.240 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.498 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.498 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.498 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.756 00:19:31.756 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.756 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.756 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.014 { 00:19:32.014 "cntlid": 125, 00:19:32.014 "qid": 0, 00:19:32.014 "state": "enabled", 00:19:32.014 "thread": "nvmf_tgt_poll_group_000", 00:19:32.014 "listen_address": { 00:19:32.014 "trtype": "TCP", 00:19:32.014 "adrfam": "IPv4", 00:19:32.014 "traddr": "10.0.0.2", 00:19:32.014 "trsvcid": "4420" 00:19:32.014 }, 00:19:32.014 "peer_address": { 00:19:32.014 "trtype": "TCP", 00:19:32.014 "adrfam": "IPv4", 00:19:32.014 "traddr": "10.0.0.1", 00:19:32.014 "trsvcid": "39820" 00:19:32.014 }, 00:19:32.014 "auth": { 00:19:32.014 "state": "completed", 00:19:32.014 "digest": "sha512", 00:19:32.014 "dhgroup": "ffdhe4096" 00:19:32.014 } 00:19:32.014 } 00:19:32.014 ]' 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.014 
18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.014 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.271 18:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.271 18:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.271 18:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.530 18:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:33.467 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.726 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.984 00:19:33.984 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.984 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.984 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.242 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.242 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.242 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.242 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.243 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.243 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.243 { 00:19:34.243 "cntlid": 127, 00:19:34.243 "qid": 0, 00:19:34.243 "state": "enabled", 00:19:34.243 "thread": "nvmf_tgt_poll_group_000", 00:19:34.243 "listen_address": { 00:19:34.243 "trtype": "TCP", 00:19:34.243 "adrfam": "IPv4", 00:19:34.243 "traddr": "10.0.0.2", 00:19:34.243 "trsvcid": "4420" 00:19:34.243 }, 00:19:34.243 "peer_address": { 00:19:34.243 "trtype": "TCP", 00:19:34.243 "adrfam": "IPv4", 00:19:34.243 "traddr": "10.0.0.1", 00:19:34.243 "trsvcid": "39860" 00:19:34.243 }, 00:19:34.243 "auth": { 00:19:34.243 "state": "completed", 00:19:34.243 "digest": "sha512", 00:19:34.243 "dhgroup": "ffdhe4096" 00:19:34.243 } 00:19:34.243 } 00:19:34.243 ]' 00:19:34.243 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.501 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.760 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:19:35.695 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.695 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:35.695 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.695 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.695 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.696 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.696 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.696 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.696 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.955 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.956 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.522 00:19:36.522 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.522 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.522 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.780 { 00:19:36.780 "cntlid": 129, 00:19:36.780 "qid": 0, 00:19:36.780 "state": "enabled", 00:19:36.780 "thread": "nvmf_tgt_poll_group_000", 00:19:36.780 "listen_address": { 00:19:36.780 "trtype": "TCP", 00:19:36.780 "adrfam": "IPv4", 00:19:36.780 "traddr": "10.0.0.2", 00:19:36.780 "trsvcid": "4420" 00:19:36.780 }, 00:19:36.780 "peer_address": { 00:19:36.780 "trtype": "TCP", 00:19:36.780 "adrfam": "IPv4", 00:19:36.780 "traddr": "10.0.0.1", 00:19:36.780 "trsvcid": "39886" 00:19:36.780 }, 00:19:36.780 "auth": { 00:19:36.780 "state": "completed", 00:19:36.780 "digest": "sha512", 00:19:36.780 "dhgroup": "ffdhe6144" 00:19:36.780 } 00:19:36.780 } 00:19:36.780 ]' 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.780 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.347 
18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.305 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.306 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.241 00:19:39.241 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.241 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.241 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.499 { 00:19:39.499 "cntlid": 131, 00:19:39.499 "qid": 0, 00:19:39.499 "state": "enabled", 00:19:39.499 "thread": "nvmf_tgt_poll_group_000", 00:19:39.499 "listen_address": { 00:19:39.499 "trtype": "TCP", 00:19:39.499 "adrfam": "IPv4", 00:19:39.499 "traddr": "10.0.0.2", 00:19:39.499 "trsvcid": "4420" 00:19:39.499 }, 00:19:39.499 "peer_address": { 00:19:39.499 "trtype": "TCP", 00:19:39.499 "adrfam": "IPv4", 00:19:39.499 "traddr": "10.0.0.1", 00:19:39.499 "trsvcid": "39912" 00:19:39.499 }, 00:19:39.499 "auth": { 00:19:39.499 "state": "completed", 00:19:39.499 "digest": "sha512", 00:19:39.499 "dhgroup": "ffdhe6144" 00:19:39.499 } 00:19:39.499 } 00:19:39.499 ]' 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.499 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.757 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.694 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.260 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.827 
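(Annotator's note: the rounds above and below are the ffdhe6144 leg of the sha512 sweep. Below is a minimal bash sketch of one such round, reconstructed from the xtrace output rather than taken from target/auth.sh itself; the $hostrpc/$tgtrpc wrapper variables and the /var/tmp/spdk.sock target socket are assumptions, while every command, flag, NQN and address appears verbatim in the trace.)

    # one connect_authenticate round, replayed from the trace above
    hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"  # host-side RPC socket, as shown at target/auth.sh@31
    tgtrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed target socket; the trace only shows the rpc_cmd wrapper
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562

    # 1. restrict the host to a single digest/dhgroup pair for this round
    $hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # 2. register the host on the target with the DH-HMAC-CHAP key pair under test
    $tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. attach from the host side; this is where the authentication handshake runs
    $hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

After verification, each round is torn down the same way the trace shows: bdev_nvme_detach_controller nvme0, an nvme connect/disconnect pass using the raw DHHC-1 secrets, and nvmf_subsystem_remove_host on the target.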
00:19:41.827 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.827 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.827 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.086 { 00:19:42.086 "cntlid": 133, 00:19:42.086 "qid": 0, 00:19:42.086 "state": "enabled", 00:19:42.086 "thread": "nvmf_tgt_poll_group_000", 00:19:42.086 "listen_address": { 00:19:42.086 "trtype": "TCP", 00:19:42.086 "adrfam": "IPv4", 00:19:42.086 "traddr": "10.0.0.2", 00:19:42.086 "trsvcid": "4420" 00:19:42.086 }, 00:19:42.086 "peer_address": { 00:19:42.086 "trtype": "TCP", 00:19:42.086 "adrfam": "IPv4", 00:19:42.086 "traddr": "10.0.0.1", 00:19:42.086 "trsvcid": "53762" 00:19:42.086 }, 00:19:42.086 "auth": { 00:19:42.086 "state": "completed", 00:19:42.086 "digest": "sha512", 00:19:42.086 "dhgroup": "ffdhe6144" 00:19:42.086 } 00:19:42.086 } 00:19:42.086 ]' 00:19:42.086 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.346 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.605 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.578 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:43.578 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.837 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.405 00:19:44.405 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.405 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.405 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:44.664 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.664 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.664 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.664 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.664 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.664 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.664 { 00:19:44.664 "cntlid": 135, 00:19:44.664 "qid": 0, 00:19:44.664 "state": "enabled", 00:19:44.664 "thread": "nvmf_tgt_poll_group_000", 00:19:44.664 "listen_address": { 00:19:44.664 "trtype": "TCP", 00:19:44.664 "adrfam": "IPv4", 00:19:44.664 "traddr": "10.0.0.2", 00:19:44.664 "trsvcid": "4420" 00:19:44.664 }, 00:19:44.664 "peer_address": { 00:19:44.664 "trtype": "TCP", 00:19:44.664 "adrfam": "IPv4", 00:19:44.664 "traddr": "10.0.0.1", 00:19:44.664 "trsvcid": "53792" 00:19:44.664 }, 00:19:44.664 "auth": { 00:19:44.664 "state": "completed", 00:19:44.665 "digest": "sha512", 00:19:44.665 "dhgroup": "ffdhe6144" 00:19:44.665 } 00:19:44.665 } 00:19:44.665 ]' 00:19:44.665 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.665 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.665 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.665 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.665 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.923 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.923 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.923 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.489 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:19:46.057 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.058 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.626 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.562 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
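(Annotator's note: each attach is followed by the verification step now starting above, condensed here from the jq probes in the trace. A sketch using the $hostrpc/$tgtrpc wrappers from the earlier note; this ffdhe8192 round's expected values come straight from the qpairs JSON dumps in the log.)

    # controller came up on the host side
    [[ $($hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # dump the subsystem's qpairs on the target and check the negotiated auth block
    qpairs=$($tgtrpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated FFDHE group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP handshake finished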
00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.562 { 00:19:47.562 "cntlid": 137, 00:19:47.562 "qid": 0, 00:19:47.562 "state": "enabled", 00:19:47.562 "thread": "nvmf_tgt_poll_group_000", 00:19:47.562 "listen_address": { 00:19:47.562 "trtype": "TCP", 00:19:47.562 "adrfam": "IPv4", 00:19:47.562 "traddr": "10.0.0.2", 00:19:47.562 "trsvcid": "4420" 00:19:47.562 }, 00:19:47.562 "peer_address": { 00:19:47.562 "trtype": "TCP", 00:19:47.562 "adrfam": "IPv4", 00:19:47.562 "traddr": "10.0.0.1", 00:19:47.562 "trsvcid": "53832" 00:19:47.562 }, 00:19:47.562 "auth": { 00:19:47.562 "state": "completed", 00:19:47.562 "digest": "sha512", 00:19:47.562 "dhgroup": "ffdhe8192" 00:19:47.562 } 00:19:47.562 } 00:19:47.562 ]' 00:19:47.562 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.821 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.080 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.016 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.275 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.841 00:19:49.841 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.841 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.841 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.100 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.100 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.100 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.100 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.101 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.101 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.101 { 00:19:50.101 "cntlid": 139, 00:19:50.101 "qid": 0, 00:19:50.101 "state": "enabled", 00:19:50.101 "thread": "nvmf_tgt_poll_group_000", 00:19:50.101 "listen_address": { 00:19:50.101 "trtype": "TCP", 00:19:50.101 "adrfam": "IPv4", 00:19:50.101 "traddr": "10.0.0.2", 00:19:50.101 "trsvcid": "4420" 00:19:50.101 }, 00:19:50.101 "peer_address": { 00:19:50.101 "trtype": "TCP", 00:19:50.101 "adrfam": "IPv4", 00:19:50.101 "traddr": "10.0.0.1", 00:19:50.101 "trsvcid": "55996" 00:19:50.101 }, 00:19:50.101 "auth": { 00:19:50.101 "state": "completed", 00:19:50.101 "digest": "sha512", 00:19:50.101 "dhgroup": "ffdhe8192" 00:19:50.101 } 00:19:50.101 } 00:19:50.101 ]' 00:19:50.101 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.101 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.101 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.359 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.359 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.359 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.359 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.359 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.618 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YjA4NzFiM2IzZGI5N2NkNmJkNzFlNTczMTMyMmVmODKx8F2R: --dhchap-ctrl-secret DHHC-1:02:ZjNkODFiYTBhYTc1MDkyZWRhOWEwYTVkYWU3ZWIzYjMxZWY3OGVmNDFlMDE5YTQwGhQquw==: 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.554 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.491 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.491 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.749 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.749 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.749 { 00:19:52.749 "cntlid": 141, 00:19:52.749 "qid": 0, 00:19:52.749 "state": "enabled", 00:19:52.749 "thread": "nvmf_tgt_poll_group_000", 00:19:52.749 "listen_address": 
{ 00:19:52.749 "trtype": "TCP", 00:19:52.749 "adrfam": "IPv4", 00:19:52.749 "traddr": "10.0.0.2", 00:19:52.749 "trsvcid": "4420" 00:19:52.749 }, 00:19:52.749 "peer_address": { 00:19:52.749 "trtype": "TCP", 00:19:52.749 "adrfam": "IPv4", 00:19:52.749 "traddr": "10.0.0.1", 00:19:52.749 "trsvcid": "56028" 00:19:52.749 }, 00:19:52.749 "auth": { 00:19:52.749 "state": "completed", 00:19:52.749 "digest": "sha512", 00:19:52.750 "dhgroup": "ffdhe8192" 00:19:52.750 } 00:19:52.750 } 00:19:52.750 ]' 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.750 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.008 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDA5NGNjYjdkYzM1ODg5OGU5ZDljODQyZGRiYWZlZmVmYzY0NGVjNDIwYmRmYjNmgckcGQ==: --dhchap-ctrl-secret DHHC-1:01:YzBhYmI4NWQ1OGI2ZjY1OTBkMjBjYmNhZmQ1Njg2Njm9p7nX: 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.944 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.203 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.768 00:19:54.768 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.768 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.768 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.027 { 00:19:55.027 "cntlid": 143, 00:19:55.027 "qid": 0, 00:19:55.027 "state": "enabled", 00:19:55.027 "thread": "nvmf_tgt_poll_group_000", 00:19:55.027 "listen_address": { 00:19:55.027 "trtype": "TCP", 00:19:55.027 "adrfam": "IPv4", 00:19:55.027 "traddr": "10.0.0.2", 00:19:55.027 "trsvcid": "4420" 00:19:55.027 }, 00:19:55.027 "peer_address": { 00:19:55.027 "trtype": "TCP", 00:19:55.027 "adrfam": "IPv4", 00:19:55.027 "traddr": "10.0.0.1", 00:19:55.027 "trsvcid": "56066" 00:19:55.027 }, 00:19:55.027 "auth": { 00:19:55.027 "state": "completed", 00:19:55.027 "digest": "sha512", 00:19:55.027 "dhgroup": 
"ffdhe8192" 00:19:55.027 } 00:19:55.027 } 00:19:55.027 ]' 00:19:55.027 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.027 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.027 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.285 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.285 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.285 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.285 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.285 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.552 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.521 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.456 00:19:57.456 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.456 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.456 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.715 { 00:19:57.715 "cntlid": 145, 00:19:57.715 "qid": 0, 00:19:57.715 "state": "enabled", 00:19:57.715 "thread": "nvmf_tgt_poll_group_000", 00:19:57.715 "listen_address": { 00:19:57.715 "trtype": "TCP", 00:19:57.715 "adrfam": "IPv4", 00:19:57.715 "traddr": "10.0.0.2", 00:19:57.715 "trsvcid": "4420" 00:19:57.715 }, 00:19:57.715 "peer_address": { 00:19:57.715 "trtype": "TCP", 00:19:57.715 "adrfam": "IPv4", 00:19:57.715 "traddr": "10.0.0.1", 00:19:57.715 "trsvcid": "56090" 00:19:57.715 }, 00:19:57.715 "auth": { 00:19:57.715 
"state": "completed", 00:19:57.715 "digest": "sha512", 00:19:57.715 "dhgroup": "ffdhe8192" 00:19:57.715 } 00:19:57.715 } 00:19:57.715 ]' 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.715 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.973 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.973 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.974 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.974 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.974 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.232 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTk3ZTcxM2M4MjI3ZmRiMTY1ZTA3ZDI3NzVjM2ExOGQ5ZTM3N2NiNjAyYzUwMDE5smIX+g==: --dhchap-ctrl-secret DHHC-1:03:YTZlZGY4M2MyYTZkMGUwY2FjNGJlZGIxZjgyOGRjYmExMTU5ODg3OWMwYjhiNTkxNzg4N2RmODUzNjgxOWU4MNYPly0=: 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:59.167 18:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:59.167 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.168 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:59.168 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.104 request: 00:20:00.104 { 00:20:00.104 "name": "nvme0", 00:20:00.104 "trtype": "tcp", 00:20:00.104 "traddr": "10.0.0.2", 00:20:00.104 "adrfam": "ipv4", 00:20:00.104 "trsvcid": "4420", 00:20:00.104 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:00.104 "prchk_reftag": false, 00:20:00.104 "prchk_guard": false, 00:20:00.104 "hdgst": false, 00:20:00.104 "ddgst": false, 00:20:00.104 "dhchap_key": "key2", 00:20:00.104 "method": "bdev_nvme_attach_controller", 00:20:00.104 "req_id": 1 00:20:00.104 } 00:20:00.104 Got JSON-RPC error response 00:20:00.104 response: 00:20:00.104 { 00:20:00.104 "code": -5, 00:20:00.104 "message": "Input/output error" 00:20:00.104 } 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.104 
18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.104 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:00.105 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:00.672 request: 00:20:00.672 { 00:20:00.672 "name": "nvme0", 00:20:00.672 "trtype": "tcp", 00:20:00.672 "traddr": "10.0.0.2", 00:20:00.672 "adrfam": "ipv4", 00:20:00.672 "trsvcid": "4420", 00:20:00.672 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:00.672 "prchk_reftag": false, 00:20:00.672 "prchk_guard": false, 00:20:00.672 "hdgst": false, 00:20:00.672 "ddgst": false, 00:20:00.672 "dhchap_key": "key1", 00:20:00.672 "dhchap_ctrlr_key": "ckey2", 00:20:00.672 "method": "bdev_nvme_attach_controller", 00:20:00.672 "req_id": 1 00:20:00.672 } 00:20:00.672 Got JSON-RPC error response 00:20:00.672 response: 00:20:00.672 { 00:20:00.672 "code": -5, 00:20:00.672 "message": "Input/output error" 00:20:00.672 } 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.672 18:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.672 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.609 request: 00:20:01.609 { 00:20:01.609 "name": "nvme0", 00:20:01.609 "trtype": "tcp", 00:20:01.609 "traddr": "10.0.0.2", 00:20:01.609 "adrfam": "ipv4", 00:20:01.609 "trsvcid": "4420", 00:20:01.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:01.609 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:01.609 "prchk_reftag": false, 00:20:01.609 "prchk_guard": false, 00:20:01.609 "hdgst": false, 00:20:01.609 "ddgst": false, 00:20:01.609 "dhchap_key": "key1", 00:20:01.609 "dhchap_ctrlr_key": "ckey1", 00:20:01.609 "method": "bdev_nvme_attach_controller", 00:20:01.609 "req_id": 1 00:20:01.609 } 00:20:01.609 Got JSON-RPC error response 00:20:01.609 response: 00:20:01.609 { 00:20:01.609 "code": -5, 00:20:01.609 "message": "Input/output error" 00:20:01.609 } 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2494880 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2494880 ']' 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2494880 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2494880 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2494880' 00:20:01.609 killing process with pid 2494880 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2494880 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2494880 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2526232 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2526232 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2526232 ']' 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.609 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2526232 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2526232 ']' 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
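At this point the script has killed the first target process (pid 2494880) and relaunches nvmf_tgt with DHCHAP debug logging enabled, then blocks until the RPC socket answers. A sketch of that relaunch, assuming the network namespace and flags shown in the command line above (the backgrounding and pid capture are an approximation of what nvmfappstart/waitforlisten do in the test harness):

    # Restart the target inside the test namespace with nvmf_auth logging,
    # mirroring the invocation recorded above; paths shortened for readability.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Block until the app is up and /var/tmp/spdk.sock accepts RPCs.
    scripts/rpc.py -t 60 rpc_get_methods > /dev/null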
00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.986 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.245 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.181 00:20:04.440 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.440 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.440 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.699 { 00:20:04.699 "cntlid": 1, 00:20:04.699 "qid": 0, 00:20:04.699 "state": "enabled", 00:20:04.699 "thread": "nvmf_tgt_poll_group_000", 00:20:04.699 "listen_address": { 00:20:04.699 "trtype": "TCP", 00:20:04.699 "adrfam": "IPv4", 00:20:04.699 "traddr": "10.0.0.2", 00:20:04.699 "trsvcid": "4420" 00:20:04.699 }, 00:20:04.699 "peer_address": { 00:20:04.699 "trtype": "TCP", 00:20:04.699 "adrfam": "IPv4", 00:20:04.699 "traddr": "10.0.0.1", 00:20:04.699 "trsvcid": "47232" 00:20:04.699 }, 00:20:04.699 "auth": { 00:20:04.699 "state": "completed", 00:20:04.699 "digest": "sha512", 00:20:04.699 "dhgroup": "ffdhe8192" 00:20:04.699 } 00:20:04.699 } 00:20:04.699 ]' 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.699 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.958 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzY1NGMzYjFlNWQ1OWIyNDJmM2U5YzBiMDliYjdmZGY5Y2E5MTVhMmFiMTAyMWQyZGRmNmZkNWZmNGFmYmU0MHtdyNM=: 00:20:05.893 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:05.894 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.461 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.461 request: 00:20:06.461 { 00:20:06.461 "name": "nvme0", 00:20:06.461 "trtype": "tcp", 00:20:06.461 "traddr": "10.0.0.2", 00:20:06.461 "adrfam": "ipv4", 00:20:06.461 "trsvcid": "4420", 00:20:06.461 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:06.461 "prchk_reftag": false, 00:20:06.461 "prchk_guard": false, 00:20:06.461 "hdgst": false, 00:20:06.461 "ddgst": false, 00:20:06.461 "dhchap_key": "key3", 00:20:06.461 "method": "bdev_nvme_attach_controller", 00:20:06.461 "req_id": 1 00:20:06.461 } 00:20:06.461 Got JSON-RPC error response 00:20:06.461 response: 00:20:06.461 { 00:20:06.461 "code": -5, 00:20:06.461 "message": "Input/output error" 00:20:06.461 } 00:20:06.719 18:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:06.719 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.977 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.236 request: 00:20:07.236 { 00:20:07.236 "name": "nvme0", 00:20:07.236 "trtype": "tcp", 00:20:07.236 "traddr": "10.0.0.2", 00:20:07.236 "adrfam": "ipv4", 00:20:07.236 "trsvcid": "4420", 00:20:07.236 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:07.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:07.236 "prchk_reftag": false, 00:20:07.236 "prchk_guard": false, 00:20:07.236 "hdgst": false, 00:20:07.236 "ddgst": false, 00:20:07.236 "dhchap_key": "key3", 00:20:07.236 
"method": "bdev_nvme_attach_controller", 00:20:07.236 "req_id": 1 00:20:07.236 } 00:20:07.236 Got JSON-RPC error response 00:20:07.236 response: 00:20:07.236 { 00:20:07.236 "code": -5, 00:20:07.236 "message": "Input/output error" 00:20:07.236 } 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:07.236 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.494 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:07.495 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:07.753 request: 00:20:07.753 { 00:20:07.753 "name": "nvme0", 00:20:07.753 "trtype": "tcp", 00:20:07.753 "traddr": "10.0.0.2", 00:20:07.753 "adrfam": "ipv4", 00:20:07.753 "trsvcid": "4420", 00:20:07.753 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:07.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:07.753 "prchk_reftag": false, 00:20:07.753 "prchk_guard": false, 00:20:07.753 "hdgst": false, 00:20:07.753 "ddgst": false, 00:20:07.753 "dhchap_key": "key0", 00:20:07.753 "dhchap_ctrlr_key": "key1", 00:20:07.753 "method": "bdev_nvme_attach_controller", 00:20:07.753 "req_id": 1 00:20:07.753 } 00:20:07.753 Got JSON-RPC error response 00:20:07.753 response: 00:20:07.753 { 00:20:07.753 "code": -5, 00:20:07.753 "message": "Input/output error" 00:20:07.753 } 00:20:07.753 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:07.753 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:07.753 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:07.753 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:07.753 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:07.753 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.320 00:20:08.320 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:08.320 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
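For reference, the DH-CHAP exercise above reduces to a short host-side JSON-RPC sequence against /var/tmp/host.sock. A condensed sketch follows (rpc.py stands for scripts/rpc.py; the address, keys, and NQNs are the ones used in this run, and the keys are assumed to have been loaded beforehand). The NOT-wrapped attaches are expected to fail with -5 (Input/output error); the unwrapped attach with key0 is the positive case.

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    # the attach the suite expects to succeed (target/auth.sh@192)
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
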
00:20:08.320 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.578 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.578 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.578 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2494927 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2494927 ']' 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2494927 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2494927 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2494927' 00:20:09.176 killing process with pid 2494927 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2494927 00:20:09.176 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2494927 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.435 rmmod nvme_tcp 00:20:09.435 rmmod nvme_fabrics 00:20:09.435 rmmod nvme_keyring 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2526232 ']' 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2526232 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2526232 ']' 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2526232 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526232 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526232' 00:20:09.435 killing process with pid 2526232 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2526232 00:20:09.435 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2526232 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.694 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.YpB /tmp/spdk.key-sha256.VZF /tmp/spdk.key-sha384.z6v /tmp/spdk.key-sha512.rTu /tmp/spdk.key-sha512.DpC /tmp/spdk.key-sha384.7VT /tmp/spdk.key-sha256.HaR '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:12.226 00:20:12.226 real 3m4.929s 00:20:12.226 user 7m13.552s 00:20:12.226 sys 0m25.162s 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.226 ************************************ 00:20:12.226 END TEST nvmf_auth_target 00:20:12.226 ************************************ 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:12.226 18:56:56 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.226 ************************************ 00:20:12.226 START TEST nvmf_bdevio_no_huge 00:20:12.226 ************************************ 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:12.226 * Looking for test storage... 00:20:12.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.226 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.227 18:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.227 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.498 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.498 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.498 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.499 18:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:17.499 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.499 18:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:17.499 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:17.499 Found net devices under 0000:af:00.0: cvl_0_0 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
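The device scan above resolves each matching E810 PCI function to its kernel net devices purely through sysfs; roughly the following (PCI addresses are specific to this host):

    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            # prints e.g. "Found net devices under 0000:af:00.0: cvl_0_0"
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done
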
00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:17.499 Found net devices under 0000:af:00.1: cvl_0_1 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.499 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.759 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:17.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:20:17.759 00:20:17.759 --- 10.0.0.2 ping statistics --- 00:20:17.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.759 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:20:17.759 00:20:17.759 --- 10.0.0.1 ping statistics --- 00:20:17.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.759 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2531453 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2531453 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2531453 ']' 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
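Both pings succeeding confirms the loopback topology nvmf_tcp_init builds from the two E810 ports: cvl_0_0 is moved into a private network namespace to host the target while cvl_0_1 stays in the default namespace as the initiator. Condensed from the xtrace above (interface and namespace names are this host's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # default ns -> target ns

nvmf_tgt is then launched inside that namespace with --no-huge -s 1024, i.e. backed by 1024 MB of ordinary pages instead of hugepages (the "-m 1024 --no-huge" in the EAL parameters echoed above), which is the point of this test.
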
00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.759 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 [2024-07-24 18:57:02.765328] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:20:17.759 [2024-07-24 18:57:02.765391] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:18.018 [2024-07-24 18:57:02.877030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.276 [2024-07-24 18:57:03.114293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.276 [2024-07-24 18:57:03.114366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.276 [2024-07-24 18:57:03.114389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.276 [2024-07-24 18:57:03.114408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.276 [2024-07-24 18:57:03.114426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.276 [2024-07-24 18:57:03.114573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:18.276 [2024-07-24 18:57:03.114689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:18.276 [2024-07-24 18:57:03.114808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:18.276 [2024-07-24 18:57:03.114816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.846 [2024-07-24 18:57:03.756870] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.846 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.846 18:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.846 Malloc0 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.847 [2024-07-24 18:57:03.806439] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.847 { 00:20:18.847 "params": { 00:20:18.847 "name": "Nvme$subsystem", 00:20:18.847 "trtype": "$TEST_TRANSPORT", 00:20:18.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.847 "adrfam": "ipv4", 00:20:18.847 "trsvcid": "$NVMF_PORT", 00:20:18.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.847 "hdgst": ${hdgst:-false}, 00:20:18.847 "ddgst": ${ddgst:-false} 00:20:18.847 }, 00:20:18.847 "method": "bdev_nvme_attach_controller" 00:20:18.847 } 00:20:18.847 EOF 00:20:18.847 )") 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
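Once the target is up, the rpc_cmd calls above provision it for bdevio. Spelled out as plain rpc.py invocations they are approximately the following (rpc_cmd is the suite's wrapper around scripts/rpc.py, assumed here to hit the target's default /var/tmp/spdk.sock inside the namespace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                         # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json output printed next is what bdevio consumes over /dev/fd/62: a single bdev_nvme_attach_controller entry pointing back at that listener.
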
00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:18.847 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.847 "params": { 00:20:18.847 "name": "Nvme1", 00:20:18.847 "trtype": "tcp", 00:20:18.847 "traddr": "10.0.0.2", 00:20:18.847 "adrfam": "ipv4", 00:20:18.847 "trsvcid": "4420", 00:20:18.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.847 "hdgst": false, 00:20:18.847 "ddgst": false 00:20:18.847 }, 00:20:18.847 "method": "bdev_nvme_attach_controller" 00:20:18.847 }' 00:20:19.105 [2024-07-24 18:57:03.859136] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:20:19.105 [2024-07-24 18:57:03.859200] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2531554 ] 00:20:19.105 [2024-07-24 18:57:03.945557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.105 [2024-07-24 18:57:04.063455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.105 [2024-07-24 18:57:04.063569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.105 [2024-07-24 18:57:04.063570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.363 I/O targets: 00:20:19.363 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:19.363 00:20:19.363 00:20:19.363 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.363 http://cunit.sourceforge.net/ 00:20:19.363 00:20:19.363 00:20:19.363 Suite: bdevio tests on: Nvme1n1 00:20:19.363 Test: blockdev write read block ...passed 00:20:19.363 Test: blockdev write zeroes read block ...passed 00:20:19.363 Test: blockdev write zeroes read no split ...passed 00:20:19.622 Test: blockdev write zeroes read split ...passed 00:20:19.622 Test: blockdev write zeroes read split partial ...passed 00:20:19.622 Test: blockdev reset ...[2024-07-24 18:57:04.435244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.622 [2024-07-24 18:57:04.435320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2428520 (9): Bad file descriptor 00:20:19.622 [2024-07-24 18:57:04.448815] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:19.622 passed 00:20:19.622 Test: blockdev write read 8 blocks ...passed 00:20:19.622 Test: blockdev write read size > 128k ...passed 00:20:19.622 Test: blockdev write read invalid size ...passed 00:20:19.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.622 Test: blockdev write read max offset ...passed 00:20:19.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.622 Test: blockdev writev readv 8 blocks ...passed 00:20:19.622 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.881 Test: blockdev writev readv block ...passed 00:20:19.881 Test: blockdev writev readv size > 128k ...passed 00:20:19.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.881 Test: blockdev comparev and writev ...[2024-07-24 18:57:04.667684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.667750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.667793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.667818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.668440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.668474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.668511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.668534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.669151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.669184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.669220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.669243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.669855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.669887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.669925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:19.881 [2024-07-24 18:57:04.669947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.881 passed 00:20:19.881 Test: blockdev nvme passthru rw ...passed 00:20:19.881 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:57:04.752180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.881 [2024-07-24 18:57:04.752221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.752507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.881 [2024-07-24 18:57:04.752537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.752826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.881 [2024-07-24 18:57:04.752857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:19.881 [2024-07-24 18:57:04.753139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.881 [2024-07-24 18:57:04.753169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:19.881 passed 00:20:19.881 Test: blockdev nvme admin passthru ...passed 00:20:19.881 Test: blockdev copy ...passed 00:20:19.881 00:20:19.881 Run Summary: Type Total Ran Passed Failed Inactive 00:20:19.881 suites 1 1 n/a 0 0 00:20:19.881 tests 23 23 23 0 0 00:20:19.881 asserts 152 152 152 0 n/a 00:20:19.881 00:20:19.881 Elapsed time = 1.165 seconds 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:20.449 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.450 rmmod nvme_tcp 00:20:20.450 rmmod nvme_fabrics 00:20:20.450 rmmod nvme_keyring 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2531453 ']' 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2531453 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2531453 ']' 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2531453 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2531453 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2531453' 00:20:20.450 killing process with pid 2531453 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2531453 00:20:20.450 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2531453 00:20:21.017 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.275 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.176 00:20:23.176 real 0m11.322s 00:20:23.176 user 0m14.428s 00:20:23.176 sys 0m5.786s 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:23.176 ************************************ 00:20:23.176 END TEST nvmf_bdevio_no_huge 00:20:23.176 ************************************ 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.176 ************************************ 00:20:23.176 START TEST nvmf_tls 00:20:23.176 ************************************ 00:20:23.176 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.437 * Looking for test storage... 00:20:23.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.437 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
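Aside, not part of the captured output: the long runs of repeated /opt/golangci, /opt/protoc and /opt/go entries above are an artifact of paths/export.sh being re-sourced by each nested test script, every pass prepending the same tool directories again. A minimal, hypothetical order-preserving dedupe (assuming only that a POSIX awk is on PATH; this is not something the harness itself does):

    PATH=$(printf '%s' "$PATH" | awk -v RS=':' -v ORS=':' '!seen[$0]++')  # keep first occurrence of each entry
    PATH=${PATH%:}                                                        # drop the trailing ORS colon
    export PATH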
00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.438 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.019 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:30.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:30.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:30.020 Found net devices under 0000:af:00.0: cvl_0_0 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:30.020 Found net devices under 0000:af:00.1: cvl_0_1 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.020 18:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.020 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:30.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:20:30.020 00:20:30.020 --- 10.0.0.2 ping statistics --- 00:20:30.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.020 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:20:30.020 00:20:30.020 --- 10.0.0.1 ping statistics --- 00:20:30.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.020 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2535868 00:20:30.020 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2535868 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2535868 ']' 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.021 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.021 [2024-07-24 18:57:14.329827] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
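Aside: the target here is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which holds subsystem initialization back until framework_start_init is issued over RPC; that window is what lets tls.sh reconfigure the ssl socket implementation before any listener exists. Condensed from the trace that follows (paths assume the standard SPDK tree layout used by this job):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    ./scripts/rpc.py sock_set_default_impl -i ssl
    ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    ./scripts/rpc.py framework_start_init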
00:20:30.021 [2024-07-24 18:57:14.329886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.021 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.021 [2024-07-24 18:57:14.420420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.021 [2024-07-24 18:57:14.524171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.021 [2024-07-24 18:57:14.524217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.021 [2024-07-24 18:57:14.524231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.021 [2024-07-24 18:57:14.524242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.021 [2024-07-24 18:57:14.524252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.021 [2024-07-24 18:57:14.524279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.281 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:30.539 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:30.797 true 00:20:30.797 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.797 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:31.059 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:31.059 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:31.059 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:31.316 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.317 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:31.574 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:31.574 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:31.574 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:31.834 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.834 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:32.093 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:32.093 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:32.093 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.093 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:32.093 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:32.093 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:32.093 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:32.352 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.352 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:32.611 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:32.611 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:32.611 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:32.870 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.870 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:33.128 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:33.128 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:33.128 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:33.128 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:33.128 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.128 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.129 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:33.129 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:33.129 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1jfW9bJNNq 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.TvZj8QgV66 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1jfW9bJNNq 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TvZj8QgV66 00:20:33.387 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:33.646 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:33.905 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1jfW9bJNNq 00:20:33.905 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1jfW9bJNNq 00:20:33.905 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.164 [2024-07-24 18:57:19.017820] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.164 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.424 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.683 [2024-07-24 18:57:19.507154] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.683 [2024-07-24 18:57:19.507422] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.683 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.943 malloc0 00:20:34.943 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:35.201 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1jfW9bJNNq 00:20:35.460 [2024-07-24 18:57:20.255593] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:35.460 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1jfW9bJNNq 00:20:35.460 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.458 Initializing NVMe Controllers 00:20:45.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:45.458 Initialization complete. Launching workers. 00:20:45.458 ======================================================== 00:20:45.458 Latency(us) 00:20:45.458 Device Information : IOPS MiB/s Average min max 00:20:45.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8365.19 32.68 7653.05 1159.69 8479.24 00:20:45.458 ======================================================== 00:20:45.458 Total : 8365.19 32.68 7653.05 1159.69 8479.24 00:20:45.458 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jfW9bJNNq 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1jfW9bJNNq' 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2538796 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2538796 /var/tmp/bdevperf.sock 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2538796 ']' 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.458 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.717 [2024-07-24 18:57:30.472720] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:20:45.717 [2024-07-24 18:57:30.472779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538796 ] 00:20:45.717 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.718 [2024-07-24 18:57:30.584666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.976 [2024-07-24 18:57:30.733422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.587 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.587 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:46.587 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1jfW9bJNNq 00:20:46.846 [2024-07-24 18:57:31.666747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.846 [2024-07-24 18:57:31.666901] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.846 TLSTESTn1 00:20:46.846 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.108 Running I/O for 10 seconds... 
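Aside: the data-path check now running is driven entirely over bdevperf's private RPC socket; the commands below are the ones the trace has just issued, collected in one place for readability. The PSK file is the NVMeTLSkey-1:01:... interchange-format key generated earlier and chmod'd to 0600:

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1jfW9bJNNq
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests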
00:20:57.092 00:20:57.092 Latency(us) 00:20:57.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.092 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.092 Verification LBA range: start 0x0 length 0x2000 00:20:57.092 TLSTESTn1 : 10.03 2825.78 11.04 0.00 0.00 45165.15 12332.68 63867.81 00:20:57.092 =================================================================================================================== 00:20:57.092 Total : 2825.78 11.04 0.00 0.00 45165.15 12332.68 63867.81 00:20:57.092 0 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2538796 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2538796 ']' 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2538796 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.092 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2538796 00:20:57.092 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.092 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.093 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2538796' 00:20:57.093 killing process with pid 2538796 00:20:57.093 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2538796 00:20:57.093 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.093 00:20:57.093 Latency(us) 00:20:57.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.093 =================================================================================================================== 00:20:57.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.093 [2024-07-24 18:57:42.027366] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.093 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2538796 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TvZj8QgV66 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TvZj8QgV66 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
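Aside: the NOT wrapper being traced below is the harness's expected-failure guard: it runs run_bdevperf with /tmp/tmp.TvZj8QgV66, a key the target was never configured with, so the TLS handshake cannot complete and a non-zero exit is the passing outcome. A simplified, hypothetical rendering of the pattern, not the exact autotest_common.sh implementation:

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture its exit status
        (( es != 0 ))    # succeed only if the command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TvZj8QgV66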
00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TvZj8QgV66 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.661 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TvZj8QgV66' 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2540891 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2540891 /var/tmp/bdevperf.sock 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2540891 ']' 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.662 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 [2024-07-24 18:57:42.441351] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:20:57.662 [2024-07-24 18:57:42.441423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540891 ] 00:20:57.662 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.662 [2024-07-24 18:57:42.555450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.920 [2024-07-24 18:57:42.703255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.484 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.484 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.484 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TvZj8QgV66 00:20:58.742 [2024-07-24 18:57:43.625425] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.742 [2024-07-24 18:57:43.625575] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.742 [2024-07-24 18:57:43.634152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:58.742 [2024-07-24 18:57:43.634469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c44af0 (107): Transport endpoint is not connected 00:20:58.742 [2024-07-24 18:57:43.635451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c44af0 (9): Bad file descriptor 00:20:58.742 [2024-07-24 18:57:43.636449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:58.742 [2024-07-24 18:57:43.636476] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:58.742 [2024-07-24 18:57:43.636501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:58.742 request: 00:20:58.742 { 00:20:58.742 "name": "TLSTEST", 00:20:58.742 "trtype": "tcp", 00:20:58.742 "traddr": "10.0.0.2", 00:20:58.742 "adrfam": "ipv4", 00:20:58.742 "trsvcid": "4420", 00:20:58.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.742 "prchk_reftag": false, 00:20:58.742 "prchk_guard": false, 00:20:58.742 "hdgst": false, 00:20:58.742 "ddgst": false, 00:20:58.742 "psk": "/tmp/tmp.TvZj8QgV66", 00:20:58.742 "method": "bdev_nvme_attach_controller", 00:20:58.742 "req_id": 1 00:20:58.742 } 00:20:58.742 Got JSON-RPC error response 00:20:58.742 response: 00:20:58.742 { 00:20:58.742 "code": -5, 00:20:58.742 "message": "Input/output error" 00:20:58.742 } 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2540891 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2540891 ']' 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2540891 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2540891 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2540891' 00:20:58.742 killing process with pid 2540891 00:20:58.742 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2540891 00:20:58.742 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.742 00:20:58.742 Latency(us) 00:20:58.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.742 =================================================================================================================== 00:20:58.743 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.743 [2024-07-24 18:57:43.719424] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.743 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2540891 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1jfW9bJNNq 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1jfW9bJNNq 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1jfW9bJNNq 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1jfW9bJNNq' 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541165 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541165 /var/tmp/bdevperf.sock 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2541165 ']' 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.310 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.310 [2024-07-24 18:57:44.091004] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:20:59.310 [2024-07-24 18:57:44.091074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541165 ] 00:20:59.310 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.310 [2024-07-24 18:57:44.206506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.569 [2024-07-24 18:57:44.346015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.137 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.137 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.137 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1jfW9bJNNq 00:21:00.396 [2024-07-24 18:57:45.204814] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.396 [2024-07-24 18:57:45.204977] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.396 [2024-07-24 18:57:45.214505] tcp.c: 946:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:00.396 [2024-07-24 18:57:45.214540] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:00.396 [2024-07-24 18:57:45.214578] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.396 [2024-07-24 18:57:45.215085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eaaf0 (107): Transport endpoint is not connected 00:21:00.396 [2024-07-24 18:57:45.216068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eaaf0 (9): Bad file descriptor 00:21:00.396 [2024-07-24 18:57:45.217067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.396 [2024-07-24 18:57:45.217093] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.396 [2024-07-24 18:57:45.217119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:00.396 request: 00:21:00.396 { 00:21:00.396 "name": "TLSTEST", 00:21:00.396 "trtype": "tcp", 00:21:00.396 "traddr": "10.0.0.2", 00:21:00.396 "adrfam": "ipv4", 00:21:00.396 "trsvcid": "4420", 00:21:00.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.396 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:00.396 "prchk_reftag": false, 00:21:00.396 "prchk_guard": false, 00:21:00.396 "hdgst": false, 00:21:00.396 "ddgst": false, 00:21:00.396 "psk": "/tmp/tmp.1jfW9bJNNq", 00:21:00.396 "method": "bdev_nvme_attach_controller", 00:21:00.396 "req_id": 1 00:21:00.396 } 00:21:00.396 Got JSON-RPC error response 00:21:00.396 response: 00:21:00.396 { 00:21:00.396 "code": -5, 00:21:00.396 "message": "Input/output error" 00:21:00.396 } 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2541165 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2541165 ']' 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2541165 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2541165 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2541165' 00:21:00.396 killing process with pid 2541165 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2541165 00:21:00.396 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.396 00:21:00.396 Latency(us) 00:21:00.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.396 =================================================================================================================== 00:21:00.396 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.396 [2024-07-24 18:57:45.297280] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.396 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2541165 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jfW9bJNNq 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jfW9bJNNq 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1jfW9bJNNq 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1jfW9bJNNq' 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541443 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541443 /var/tmp/bdevperf.sock 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2541443 ']' 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.655 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.655 [2024-07-24 18:57:45.633708] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
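Every failing case in this block is wrapped in SPDK's NOT assertion from common/autotest_common.sh, which is what produces the repeated valid_exec_arg / type -t / es=1 trace lines: the wrapper runs the command and the test passes only if the command failed. A rough sketch of the idiom, hedged from the @648-@675 markers (the real helper also supports an expected-output check, which is the [[ -n '' ]] line in the trace):

    # Sketch of the NOT assert-failure wrapper; a reconstruction, not SPDK source.
    NOT() {
        local es=0                             # @648
        valid_exec_arg "$@" && "$@" || es=$?   # @650-@651: run it, capture the exit code
        (( es > 128 )) && es=$((es & ~128))    # @659: strip the killed-by-signal bit
        (( es != 0 ))                          # @675: pass only when "$@" failed
    }

So target/tls.sh@152's NOT run_bdevperf asserts that host1 must not be able to attach to cnode2 using a PSK that was registered for a different host/subsystem pair; the attach below is expected to die with an I/O error.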
00:21:00.655 [2024-07-24 18:57:45.633770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541443 ] 00:21:00.915 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.915 [2024-07-24 18:57:45.746422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.915 [2024-07-24 18:57:45.889130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.850 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.850 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.850 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1jfW9bJNNq 00:21:01.850 [2024-07-24 18:57:46.831512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.850 [2024-07-24 18:57:46.831673] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.850 [2024-07-24 18:57:46.840283] tcp.c: 946:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:01.850 [2024-07-24 18:57:46.840319] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:01.850 [2024-07-24 18:57:46.840359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.850 [2024-07-24 18:57:46.840691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71af0 (107): Transport endpoint is not connected 00:21:01.850 [2024-07-24 18:57:46.841670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71af0 (9): Bad file descriptor 00:21:01.850 [2024-07-24 18:57:46.842666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:01.850 [2024-07-24 18:57:46.842694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.850 [2024-07-24 18:57:46.842721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:01.850 request: 00:21:01.850 { 00:21:01.850 "name": "TLSTEST", 00:21:01.850 "trtype": "tcp", 00:21:01.850 "traddr": "10.0.0.2", 00:21:01.850 "adrfam": "ipv4", 00:21:01.850 "trsvcid": "4420", 00:21:01.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.850 "prchk_reftag": false, 00:21:01.850 "prchk_guard": false, 00:21:01.850 "hdgst": false, 00:21:01.850 "ddgst": false, 00:21:01.850 "psk": "/tmp/tmp.1jfW9bJNNq", 00:21:01.850 "method": "bdev_nvme_attach_controller", 00:21:01.850 "req_id": 1 00:21:01.850 } 00:21:01.850 Got JSON-RPC error response 00:21:01.850 response: 00:21:01.850 { 00:21:01.850 "code": -5, 00:21:01.850 "message": "Input/output error" 00:21:01.850 } 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2541443 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2541443 ']' 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2541443 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2541443 00:21:02.108 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:02.109 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:02.109 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2541443' 00:21:02.109 killing process with pid 2541443 00:21:02.109 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2541443 00:21:02.109 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.109 00:21:02.109 Latency(us) 00:21:02.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.109 =================================================================================================================== 00:21:02.109 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.109 [2024-07-24 18:57:46.926038] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.109 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2541443 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541711 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541711 /var/tmp/bdevperf.sock 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2541711 ']' 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.367 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.367 [2024-07-24 18:57:47.261794] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
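waitforlisten, traced at common/autotest_common.sh@829-@862 every time a bdevperf or nvmf_tgt instance starts, blocks until the given pid is up and listening on its RPC socket. The sketch below is a hedged reading of the trace markers; the real helper is more careful (for one, it polls the RPC socket via rpc.py rather than a bare socket test):

    waitforlisten() {
        [ -z "$1" ] && return 1                       # @829: a pid is mandatory
        local rpc_addr=${2:-/var/tmp/spdk.sock}       # @833: default RPC socket
        local max_retries=100 i                       # @834
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$1" 2> /dev/null || return 1     # bail out if the app already died
            [ -S "$rpc_addr" ] && break               # the socket showed up
            sleep 0.1
        done
        (( i == 0 )) && return 1                      # @858: retries exhausted
        return 0                                      # @862
    }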
00:21:02.367 [2024-07-24 18:57:47.261861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541711 ] 00:21:02.367 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.626 [2024-07-24 18:57:47.375969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.626 [2024-07-24 18:57:47.515537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:03.563 [2024-07-24 18:57:48.443584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.563 [2024-07-24 18:57:48.445519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125a030 (9): Bad file descriptor 00:21:03.563 [2024-07-24 18:57:48.446514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.563 [2024-07-24 18:57:48.446541] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.563 [2024-07-24 18:57:48.446567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:03.563 request: 00:21:03.563 { 00:21:03.563 "name": "TLSTEST", 00:21:03.563 "trtype": "tcp", 00:21:03.563 "traddr": "10.0.0.2", 00:21:03.563 "adrfam": "ipv4", 00:21:03.563 "trsvcid": "4420", 00:21:03.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.563 "prchk_reftag": false, 00:21:03.563 "prchk_guard": false, 00:21:03.563 "hdgst": false, 00:21:03.563 "ddgst": false, 00:21:03.563 "method": "bdev_nvme_attach_controller", 00:21:03.563 "req_id": 1 00:21:03.563 } 00:21:03.563 Got JSON-RPC error response 00:21:03.563 response: 00:21:03.563 { 00:21:03.563 "code": -5, 00:21:03.563 "message": "Input/output error" 00:21:03.563 } 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2541711 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2541711 ']' 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2541711 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2541711 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2541711' 00:21:03.563 killing process with pid 2541711 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2541711 00:21:03.563 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.563 00:21:03.563 Latency(us) 00:21:03.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.563 =================================================================================================================== 00:21:03.563 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.563 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2541711 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2535868 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2535868 ']' 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2535868 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.822 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2535868 00:21:04.081 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.081 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.081 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2535868' 00:21:04.081 killing process with pid 2535868 00:21:04.081 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2535868 00:21:04.081 [2024-07-24 18:57:48.854020] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.081 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2535868 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:04.340 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.bT2ZLoSqIU 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.bT2ZLoSqIU 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2542162 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2542162 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2542162 ']' 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.341 18:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.341 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.341 [2024-07-24 18:57:49.215814] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:04.341 [2024-07-24 18:57:49.215882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.341 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.341 [2024-07-24 18:57:49.303637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.600 [2024-07-24 18:57:49.402900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.600 [2024-07-24 18:57:49.402953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.600 [2024-07-24 18:57:49.402966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.600 [2024-07-24 18:57:49.402977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.600 [2024-07-24 18:57:49.402987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
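Before this target comes up, target/tls.sh@159-@162 above builds the long-form key: format_interchange_psk feeds a raw 48-hex-character secret to format_key (nvmf/common.sh@702-@705), which emits the NVMe/TCP PSK interchange form <prefix>:<hash indicator>:<base64 of key bytes plus CRC32>:. A hedged sketch of what the traced python step computes; the little-endian CRC byte order is an assumption, though it matches the key_long output captured above:

    # Sketch of nvmf/common.sh:format_key; digest 2 selects the SHA-384 hash
    # indicator in the interchange header.
    format_key() {
        local prefix=$1 key=$2 digest=$3
        python -c "import base64, zlib; k = b'$key'; \
            crc = zlib.crc32(k).to_bytes(4, 'little'); \
            print('$prefix:%02x:%s:' % ($digest, base64.b64encode(k + crc).decode()), end='')"
    }
    # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
    # -> NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==:  (the key_long captured above)

Note that the secret is base64'd as its ASCII hex spelling, not as decoded bytes, which is why the payload starts with MDAxMTIy ("001122...").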
00:21:04.600 [2024-07-24 18:57:49.403024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.184 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.184 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.184 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.184 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.184 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.443 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.443 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.bT2ZLoSqIU 00:21:05.443 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bT2ZLoSqIU 00:21:05.443 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.443 [2024-07-24 18:57:50.355303] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.443 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:05.705 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:05.963 [2024-07-24 18:57:50.872707] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.963 [2024-07-24 18:57:50.872953] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.963 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:06.220 malloc0 00:21:06.220 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:06.477 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:06.736 [2024-07-24 18:57:51.649123] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bT2ZLoSqIU 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bT2ZLoSqIU' 00:21:06.736 18:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2542548 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2542548 /var/tmp/bdevperf.sock 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2542548 ']' 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.736 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.736 [2024-07-24 18:57:51.735434] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:06.736 [2024-07-24 18:57:51.735498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542548 ] 00:21:06.994 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.994 [2024-07-24 18:57:51.848674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.994 [2024-07-24 18:57:51.998908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.930 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.930 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:07.930 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:07.930 [2024-07-24 18:57:52.920689] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.930 [2024-07-24 18:57:52.920845] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.188 TLSTESTn1 00:21:08.188 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.188 Running I/O for 10 seconds... 
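The target side of this positive case was prepared by setup_nvmf_tgt (target/tls.sh@49-@58), traced above. Condensed to the bare RPC sequence, with the rpc.py path shortened and everything else as captured in the trace:

    key=/tmp/tmp.bT2ZLoSqIU                            # mode 0600, holds the interchange PSK
    rpc.py nvmf_create_transport -t tcp -o             # @51
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                  # @53: -k makes the listener TLS-only
    rpc.py bdev_malloc_create 32 4096 -b malloc0       # @55: 32 MiB bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"         # @58: bind the PSK to host1

With the PSK registered for host1 on cnode1, the attach above succeeds, TLSTESTn1 is created, and the 10-second verify run whose results follow can proceed.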
00:21:20.388 00:21:20.388 Latency(us) 00:21:20.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.388 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.388 Verification LBA range: start 0x0 length 0x2000 00:21:20.388 TLSTESTn1 : 10.02 2824.06 11.03 0.00 0.00 45205.70 9175.04 48139.17 00:21:20.389 =================================================================================================================== 00:21:20.389 Total : 2824.06 11.03 0.00 0.00 45205.70 9175.04 48139.17 00:21:20.389 0 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2542548 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2542548 ']' 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2542548 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2542548 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2542548' 00:21:20.389 killing process with pid 2542548 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2542548 00:21:20.389 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.389 00:21:20.389 Latency(us) 00:21:20.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.389 =================================================================================================================== 00:21:20.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.389 [2024-07-24 18:58:03.271106] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2542548 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.bT2ZLoSqIU 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bT2ZLoSqIU 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bT2ZLoSqIU 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:20.389 
18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bT2ZLoSqIU 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bT2ZLoSqIU' 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2544638 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2544638 /var/tmp/bdevperf.sock 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2544638 ']' 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.389 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.389 [2024-07-24 18:58:03.646882] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
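Stepping back to the TLSTESTn1 results table above: with the 4096-byte I/O size bdevperf was started with, the MiB/s column is just IOPS scaled by the block size,

    $ python -c 'print(2824.06 * 4096 / 2**20)'
    11.0314...

which rounds to the 11.03 MiB/s reported alongside 2824.06 IOPS. The bdevperf instance now starting (pid 2544638) exercises the 0666-permission case set up by the chmod at target/tls.sh@170 above.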
00:21:20.389 [2024-07-24 18:58:03.646953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544638 ] 00:21:20.389 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.389 [2024-07-24 18:58:03.760706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.389 [2024-07-24 18:58:03.899765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:20.389 [2024-07-24 18:58:04.835716] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.389 [2024-07-24 18:58:04.835822] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:20.389 [2024-07-24 18:58:04.835843] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.bT2ZLoSqIU 00:21:20.389 request: 00:21:20.389 { 00:21:20.389 "name": "TLSTEST", 00:21:20.389 "trtype": "tcp", 00:21:20.389 "traddr": "10.0.0.2", 00:21:20.389 "adrfam": "ipv4", 00:21:20.389 "trsvcid": "4420", 00:21:20.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.389 "prchk_reftag": false, 00:21:20.389 "prchk_guard": false, 00:21:20.389 "hdgst": false, 00:21:20.389 "ddgst": false, 00:21:20.389 "psk": "/tmp/tmp.bT2ZLoSqIU", 00:21:20.389 "method": "bdev_nvme_attach_controller", 00:21:20.389 "req_id": 1 00:21:20.389 } 00:21:20.389 Got JSON-RPC error response 00:21:20.389 response: 00:21:20.389 { 00:21:20.389 "code": -1, 00:21:20.389 "message": "Operation not permitted" 00:21:20.389 } 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2544638 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2544638 ']' 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2544638 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2544638 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2544638' 00:21:20.389 killing process with pid 2544638 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2544638 00:21:20.389 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.389 
00:21:20.389 Latency(us) 00:21:20.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.389 =================================================================================================================== 00:21:20.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.389 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2544638 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2542162 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2542162 ']' 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2542162 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2542162 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2542162' 00:21:20.389 killing process with pid 2542162 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2542162 00:21:20.389 [2024-07-24 18:58:05.245697] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:20.389 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2542162 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2544946 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2544946 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2544946 ']' 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.649 18:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.649 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.649 [2024-07-24 18:58:05.542241] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:20.650 [2024-07-24 18:58:05.542305] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.650 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.650 [2024-07-24 18:58:05.629447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.909 [2024-07-24 18:58:05.734108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.909 [2024-07-24 18:58:05.734154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.909 [2024-07-24 18:58:05.734168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.909 [2024-07-24 18:58:05.734179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.909 [2024-07-24 18:58:05.734189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
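The chmod 0666 at target/tls.sh@170 above made the initiator-side loader refuse the key (bdev_nvme.c:6153, surfaced as JSON-RPC -1 / Operation not permitted); the target now starting will hit the same rule server-side when nvmf_subsystem_add_host reads the file (tcp.c's tcp_load_psk, below). The gate is effectively an owner-only permission check, sketched here as a reading of the rule, not the SPDK source:

    perms=$(stat -c '%a' /tmp/tmp.bT2ZLoSqIU)     # 666 after tls.sh@170, normally 600
    if (( 8#$perms & 8#077 )); then               # any group/other bit set?
        echo 'Incorrect permissions for PSK file' >&2
    fi
    chmod 0600 /tmp/tmp.bT2ZLoSqIU                # what target/tls.sh@181 later restores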
00:21:20.909 [2024-07-24 18:58:05.734216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.845 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.845 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.845 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.845 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.845 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.bT2ZLoSqIU 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.bT2ZLoSqIU 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.bT2ZLoSqIU 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bT2ZLoSqIU 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.846 [2024-07-24 18:58:06.766779] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.846 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.118 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.376 [2024-07-24 18:58:07.288198] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.376 [2024-07-24 18:58:07.288441] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.376 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.634 malloc0 00:21:22.634 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.893 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:23.152 [2024-07-24 18:58:08.060625] tcp.c:3681:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:23.152 [2024-07-24 18:58:08.060665] tcp.c:3767:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:23.152 [2024-07-24 18:58:08.060702] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:23.152 request: 00:21:23.152 { 00:21:23.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.152 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.152 "psk": "/tmp/tmp.bT2ZLoSqIU", 00:21:23.152 "method": "nvmf_subsystem_add_host", 00:21:23.152 "req_id": 1 00:21:23.152 } 00:21:23.152 Got JSON-RPC error response 00:21:23.152 response: 00:21:23.152 { 00:21:23.152 "code": -32603, 00:21:23.152 "message": "Internal error" 00:21:23.152 } 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2544946 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2544946 ']' 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2544946 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2544946 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2544946' 00:21:23.152 killing process with pid 2544946 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2544946 00:21:23.152 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2544946 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.bT2ZLoSqIU 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2545491 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2545491 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2545491 ']' 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.720 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.720 [2024-07-24 18:58:08.513753] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:23.720 [2024-07-24 18:58:08.513821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.720 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.720 [2024-07-24 18:58:08.602517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.720 [2024-07-24 18:58:08.702071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.720 [2024-07-24 18:58:08.702121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.720 [2024-07-24 18:58:08.702134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.720 [2024-07-24 18:58:08.702146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.720 [2024-07-24 18:58:08.702155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
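Each nvmf_tgt instance in this run is brought up by nvmfappstart, whose nvmf/common.sh@479-@484 trace repeats above. A hedged reconstruction of the pattern; NVMF_APP is assumed to be the ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt command array seen at @480:

    nvmfappstart() {
        timing_enter start_nvmf_tgt                  # @479
        "${NVMF_APP[@]}" "$@" &                      # @480: here, nvmf_tgt -i 0 -e 0xFFFF -m 0x2
        nvmfpid=$!                                   # @481
        waitforlisten "$nvmfpid"                     # @482: default /var/tmp/spdk.sock
        timing_exit start_nvmf_tgt                   # @483
        # @484: on exit, dump shared memory for debugging, then tear everything down
        trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    }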
00:21:23.720 [2024-07-24 18:58:08.702183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.bT2ZLoSqIU 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bT2ZLoSqIU 00:21:24.657 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.657 [2024-07-24 18:58:09.650761] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.915 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:25.173 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:25.173 [2024-07-24 18:58:10.164177] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.173 [2024-07-24 18:58:10.164423] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.431 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:25.431 malloc0 00:21:25.690 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:25.950 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:25.950 [2024-07-24 18:58:10.928560] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2546021 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2546021 /var/tmp/bdevperf.sock 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 2546021 ']' 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.211 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.211 [2024-07-24 18:58:11.012176] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:26.211 [2024-07-24 18:58:11.012237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546021 ] 00:21:26.211 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.211 [2024-07-24 18:58:11.124821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.469 [2024-07-24 18:58:11.269497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.034 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.034 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.034 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:27.292 [2024-07-24 18:58:12.186050] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.292 [2024-07-24 18:58:12.186201] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.292 TLSTESTn1 00:21:27.292 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:27.858 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:27.858 "subsystems": [ 00:21:27.858 { 00:21:27.858 "subsystem": "keyring", 00:21:27.858 "config": [] 00:21:27.858 }, 00:21:27.858 { 00:21:27.858 "subsystem": "iobuf", 00:21:27.858 "config": [ 00:21:27.858 { 00:21:27.858 "method": "iobuf_set_options", 00:21:27.858 "params": { 00:21:27.858 "small_pool_count": 8192, 00:21:27.858 "large_pool_count": 1024, 00:21:27.858 "small_bufsize": 8192, 00:21:27.858 "large_bufsize": 135168 00:21:27.858 } 00:21:27.858 } 00:21:27.858 ] 00:21:27.858 }, 00:21:27.858 { 00:21:27.858 "subsystem": "sock", 00:21:27.858 "config": [ 00:21:27.858 { 00:21:27.858 "method": "sock_set_default_impl", 00:21:27.858 "params": { 00:21:27.858 "impl_name": "posix" 00:21:27.858 } 00:21:27.858 }, 00:21:27.858 { 00:21:27.858 "method": "sock_impl_set_options", 00:21:27.858 "params": { 00:21:27.858 "impl_name": "ssl", 00:21:27.858 "recv_buf_size": 4096, 00:21:27.858 "send_buf_size": 4096, 
00:21:27.858 "enable_recv_pipe": true, 00:21:27.858 "enable_quickack": false, 00:21:27.858 "enable_placement_id": 0, 00:21:27.859 "enable_zerocopy_send_server": true, 00:21:27.859 "enable_zerocopy_send_client": false, 00:21:27.859 "zerocopy_threshold": 0, 00:21:27.859 "tls_version": 0, 00:21:27.859 "enable_ktls": false 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "sock_impl_set_options", 00:21:27.859 "params": { 00:21:27.859 "impl_name": "posix", 00:21:27.859 "recv_buf_size": 2097152, 00:21:27.859 "send_buf_size": 2097152, 00:21:27.859 "enable_recv_pipe": true, 00:21:27.859 "enable_quickack": false, 00:21:27.859 "enable_placement_id": 0, 00:21:27.859 "enable_zerocopy_send_server": true, 00:21:27.859 "enable_zerocopy_send_client": false, 00:21:27.859 "zerocopy_threshold": 0, 00:21:27.859 "tls_version": 0, 00:21:27.859 "enable_ktls": false 00:21:27.859 } 00:21:27.859 } 00:21:27.859 ] 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "subsystem": "vmd", 00:21:27.859 "config": [] 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "subsystem": "accel", 00:21:27.859 "config": [ 00:21:27.859 { 00:21:27.859 "method": "accel_set_options", 00:21:27.859 "params": { 00:21:27.859 "small_cache_size": 128, 00:21:27.859 "large_cache_size": 16, 00:21:27.859 "task_count": 2048, 00:21:27.859 "sequence_count": 2048, 00:21:27.859 "buf_count": 2048 00:21:27.859 } 00:21:27.859 } 00:21:27.859 ] 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "subsystem": "bdev", 00:21:27.859 "config": [ 00:21:27.859 { 00:21:27.859 "method": "bdev_set_options", 00:21:27.859 "params": { 00:21:27.859 "bdev_io_pool_size": 65535, 00:21:27.859 "bdev_io_cache_size": 256, 00:21:27.859 "bdev_auto_examine": true, 00:21:27.859 "iobuf_small_cache_size": 128, 00:21:27.859 "iobuf_large_cache_size": 16 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "bdev_raid_set_options", 00:21:27.859 "params": { 00:21:27.859 "process_window_size_kb": 1024, 00:21:27.859 "process_max_bandwidth_mb_sec": 0 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "bdev_iscsi_set_options", 00:21:27.859 "params": { 00:21:27.859 "timeout_sec": 30 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "bdev_nvme_set_options", 00:21:27.859 "params": { 00:21:27.859 "action_on_timeout": "none", 00:21:27.859 "timeout_us": 0, 00:21:27.859 "timeout_admin_us": 0, 00:21:27.859 "keep_alive_timeout_ms": 10000, 00:21:27.859 "arbitration_burst": 0, 00:21:27.859 "low_priority_weight": 0, 00:21:27.859 "medium_priority_weight": 0, 00:21:27.859 "high_priority_weight": 0, 00:21:27.859 "nvme_adminq_poll_period_us": 10000, 00:21:27.859 "nvme_ioq_poll_period_us": 0, 00:21:27.859 "io_queue_requests": 0, 00:21:27.859 "delay_cmd_submit": true, 00:21:27.859 "transport_retry_count": 4, 00:21:27.859 "bdev_retry_count": 3, 00:21:27.859 "transport_ack_timeout": 0, 00:21:27.859 "ctrlr_loss_timeout_sec": 0, 00:21:27.859 "reconnect_delay_sec": 0, 00:21:27.859 "fast_io_fail_timeout_sec": 0, 00:21:27.859 "disable_auto_failback": false, 00:21:27.859 "generate_uuids": false, 00:21:27.859 "transport_tos": 0, 00:21:27.859 "nvme_error_stat": false, 00:21:27.859 "rdma_srq_size": 0, 00:21:27.859 "io_path_stat": false, 00:21:27.859 "allow_accel_sequence": false, 00:21:27.859 "rdma_max_cq_size": 0, 00:21:27.859 "rdma_cm_event_timeout_ms": 0, 00:21:27.859 "dhchap_digests": [ 00:21:27.859 "sha256", 00:21:27.859 "sha384", 00:21:27.859 "sha512" 00:21:27.859 ], 00:21:27.859 "dhchap_dhgroups": [ 00:21:27.859 "null", 00:21:27.859 "ffdhe2048", 00:21:27.859 
"ffdhe3072", 00:21:27.859 "ffdhe4096", 00:21:27.859 "ffdhe6144", 00:21:27.859 "ffdhe8192" 00:21:27.859 ] 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "bdev_nvme_set_hotplug", 00:21:27.859 "params": { 00:21:27.859 "period_us": 100000, 00:21:27.859 "enable": false 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "bdev_malloc_create", 00:21:27.859 "params": { 00:21:27.859 "name": "malloc0", 00:21:27.859 "num_blocks": 8192, 00:21:27.859 "block_size": 4096, 00:21:27.859 "physical_block_size": 4096, 00:21:27.859 "uuid": "fa78bcc5-f6bb-4ba8-85b4-5a256fa01f03", 00:21:27.859 "optimal_io_boundary": 0, 00:21:27.859 "md_size": 0, 00:21:27.859 "dif_type": 0, 00:21:27.859 "dif_is_head_of_md": false, 00:21:27.859 "dif_pi_format": 0 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "bdev_wait_for_examine" 00:21:27.859 } 00:21:27.859 ] 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "subsystem": "nbd", 00:21:27.859 "config": [] 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "subsystem": "scheduler", 00:21:27.859 "config": [ 00:21:27.859 { 00:21:27.859 "method": "framework_set_scheduler", 00:21:27.859 "params": { 00:21:27.859 "name": "static" 00:21:27.859 } 00:21:27.859 } 00:21:27.859 ] 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "subsystem": "nvmf", 00:21:27.859 "config": [ 00:21:27.859 { 00:21:27.859 "method": "nvmf_set_config", 00:21:27.859 "params": { 00:21:27.859 "discovery_filter": "match_any", 00:21:27.859 "admin_cmd_passthru": { 00:21:27.859 "identify_ctrlr": false 00:21:27.859 } 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "nvmf_set_max_subsystems", 00:21:27.859 "params": { 00:21:27.859 "max_subsystems": 1024 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "nvmf_set_crdt", 00:21:27.859 "params": { 00:21:27.859 "crdt1": 0, 00:21:27.859 "crdt2": 0, 00:21:27.859 "crdt3": 0 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "nvmf_create_transport", 00:21:27.859 "params": { 00:21:27.859 "trtype": "TCP", 00:21:27.859 "max_queue_depth": 128, 00:21:27.859 "max_io_qpairs_per_ctrlr": 127, 00:21:27.859 "in_capsule_data_size": 4096, 00:21:27.859 "max_io_size": 131072, 00:21:27.859 "io_unit_size": 131072, 00:21:27.859 "max_aq_depth": 128, 00:21:27.859 "num_shared_buffers": 511, 00:21:27.859 "buf_cache_size": 4294967295, 00:21:27.859 "dif_insert_or_strip": false, 00:21:27.859 "zcopy": false, 00:21:27.859 "c2h_success": false, 00:21:27.859 "sock_priority": 0, 00:21:27.859 "abort_timeout_sec": 1, 00:21:27.859 "ack_timeout": 0, 00:21:27.859 "data_wr_pool_size": 0 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "nvmf_create_subsystem", 00:21:27.859 "params": { 00:21:27.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.859 "allow_any_host": false, 00:21:27.859 "serial_number": "SPDK00000000000001", 00:21:27.859 "model_number": "SPDK bdev Controller", 00:21:27.859 "max_namespaces": 10, 00:21:27.859 "min_cntlid": 1, 00:21:27.859 "max_cntlid": 65519, 00:21:27.859 "ana_reporting": false 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "nvmf_subsystem_add_host", 00:21:27.859 "params": { 00:21:27.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.859 "host": "nqn.2016-06.io.spdk:host1", 00:21:27.859 "psk": "/tmp/tmp.bT2ZLoSqIU" 00:21:27.859 } 00:21:27.859 }, 00:21:27.859 { 00:21:27.859 "method": "nvmf_subsystem_add_ns", 00:21:27.859 "params": { 00:21:27.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.859 "namespace": { 00:21:27.859 "nsid": 1, 00:21:27.859 
"bdev_name": "malloc0", 00:21:27.859 "nguid": "FA78BCC5F6BB4BA885B45A256FA01F03", 00:21:27.859 "uuid": "fa78bcc5-f6bb-4ba8-85b4-5a256fa01f03", 00:21:27.860 "no_auto_visible": false 00:21:27.860 } 00:21:27.860 } 00:21:27.860 }, 00:21:27.860 { 00:21:27.860 "method": "nvmf_subsystem_add_listener", 00:21:27.860 "params": { 00:21:27.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.860 "listen_address": { 00:21:27.860 "trtype": "TCP", 00:21:27.860 "adrfam": "IPv4", 00:21:27.860 "traddr": "10.0.0.2", 00:21:27.860 "trsvcid": "4420" 00:21:27.860 }, 00:21:27.860 "secure_channel": true 00:21:27.860 } 00:21:27.860 } 00:21:27.860 ] 00:21:27.860 } 00:21:27.860 ] 00:21:27.860 }' 00:21:27.860 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:28.119 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:28.119 "subsystems": [ 00:21:28.119 { 00:21:28.119 "subsystem": "keyring", 00:21:28.119 "config": [] 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "subsystem": "iobuf", 00:21:28.119 "config": [ 00:21:28.119 { 00:21:28.119 "method": "iobuf_set_options", 00:21:28.119 "params": { 00:21:28.119 "small_pool_count": 8192, 00:21:28.119 "large_pool_count": 1024, 00:21:28.119 "small_bufsize": 8192, 00:21:28.119 "large_bufsize": 135168 00:21:28.119 } 00:21:28.119 } 00:21:28.119 ] 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "subsystem": "sock", 00:21:28.119 "config": [ 00:21:28.119 { 00:21:28.119 "method": "sock_set_default_impl", 00:21:28.119 "params": { 00:21:28.119 "impl_name": "posix" 00:21:28.119 } 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "method": "sock_impl_set_options", 00:21:28.119 "params": { 00:21:28.119 "impl_name": "ssl", 00:21:28.119 "recv_buf_size": 4096, 00:21:28.119 "send_buf_size": 4096, 00:21:28.119 "enable_recv_pipe": true, 00:21:28.119 "enable_quickack": false, 00:21:28.119 "enable_placement_id": 0, 00:21:28.119 "enable_zerocopy_send_server": true, 00:21:28.119 "enable_zerocopy_send_client": false, 00:21:28.119 "zerocopy_threshold": 0, 00:21:28.119 "tls_version": 0, 00:21:28.119 "enable_ktls": false 00:21:28.119 } 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "method": "sock_impl_set_options", 00:21:28.119 "params": { 00:21:28.119 "impl_name": "posix", 00:21:28.119 "recv_buf_size": 2097152, 00:21:28.119 "send_buf_size": 2097152, 00:21:28.119 "enable_recv_pipe": true, 00:21:28.119 "enable_quickack": false, 00:21:28.119 "enable_placement_id": 0, 00:21:28.119 "enable_zerocopy_send_server": true, 00:21:28.119 "enable_zerocopy_send_client": false, 00:21:28.119 "zerocopy_threshold": 0, 00:21:28.119 "tls_version": 0, 00:21:28.119 "enable_ktls": false 00:21:28.119 } 00:21:28.119 } 00:21:28.119 ] 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "subsystem": "vmd", 00:21:28.119 "config": [] 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "subsystem": "accel", 00:21:28.119 "config": [ 00:21:28.119 { 00:21:28.119 "method": "accel_set_options", 00:21:28.119 "params": { 00:21:28.119 "small_cache_size": 128, 00:21:28.119 "large_cache_size": 16, 00:21:28.119 "task_count": 2048, 00:21:28.119 "sequence_count": 2048, 00:21:28.119 "buf_count": 2048 00:21:28.119 } 00:21:28.119 } 00:21:28.119 ] 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "subsystem": "bdev", 00:21:28.119 "config": [ 00:21:28.119 { 00:21:28.119 "method": "bdev_set_options", 00:21:28.119 "params": { 00:21:28.119 "bdev_io_pool_size": 65535, 00:21:28.119 "bdev_io_cache_size": 256, 00:21:28.119 
"bdev_auto_examine": true, 00:21:28.119 "iobuf_small_cache_size": 128, 00:21:28.119 "iobuf_large_cache_size": 16 00:21:28.119 } 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "method": "bdev_raid_set_options", 00:21:28.119 "params": { 00:21:28.119 "process_window_size_kb": 1024, 00:21:28.119 "process_max_bandwidth_mb_sec": 0 00:21:28.119 } 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "method": "bdev_iscsi_set_options", 00:21:28.119 "params": { 00:21:28.119 "timeout_sec": 30 00:21:28.119 } 00:21:28.119 }, 00:21:28.119 { 00:21:28.119 "method": "bdev_nvme_set_options", 00:21:28.119 "params": { 00:21:28.119 "action_on_timeout": "none", 00:21:28.119 "timeout_us": 0, 00:21:28.119 "timeout_admin_us": 0, 00:21:28.119 "keep_alive_timeout_ms": 10000, 00:21:28.119 "arbitration_burst": 0, 00:21:28.119 "low_priority_weight": 0, 00:21:28.119 "medium_priority_weight": 0, 00:21:28.119 "high_priority_weight": 0, 00:21:28.119 "nvme_adminq_poll_period_us": 10000, 00:21:28.119 "nvme_ioq_poll_period_us": 0, 00:21:28.119 "io_queue_requests": 512, 00:21:28.119 "delay_cmd_submit": true, 00:21:28.119 "transport_retry_count": 4, 00:21:28.119 "bdev_retry_count": 3, 00:21:28.119 "transport_ack_timeout": 0, 00:21:28.119 "ctrlr_loss_timeout_sec": 0, 00:21:28.119 "reconnect_delay_sec": 0, 00:21:28.119 "fast_io_fail_timeout_sec": 0, 00:21:28.119 "disable_auto_failback": false, 00:21:28.119 "generate_uuids": false, 00:21:28.119 "transport_tos": 0, 00:21:28.119 "nvme_error_stat": false, 00:21:28.119 "rdma_srq_size": 0, 00:21:28.119 "io_path_stat": false, 00:21:28.119 "allow_accel_sequence": false, 00:21:28.120 "rdma_max_cq_size": 0, 00:21:28.120 "rdma_cm_event_timeout_ms": 0, 00:21:28.120 "dhchap_digests": [ 00:21:28.120 "sha256", 00:21:28.120 "sha384", 00:21:28.120 "sha512" 00:21:28.120 ], 00:21:28.120 "dhchap_dhgroups": [ 00:21:28.120 "null", 00:21:28.120 "ffdhe2048", 00:21:28.120 "ffdhe3072", 00:21:28.120 "ffdhe4096", 00:21:28.120 "ffdhe6144", 00:21:28.120 "ffdhe8192" 00:21:28.120 ] 00:21:28.120 } 00:21:28.120 }, 00:21:28.120 { 00:21:28.120 "method": "bdev_nvme_attach_controller", 00:21:28.120 "params": { 00:21:28.120 "name": "TLSTEST", 00:21:28.120 "trtype": "TCP", 00:21:28.120 "adrfam": "IPv4", 00:21:28.120 "traddr": "10.0.0.2", 00:21:28.120 "trsvcid": "4420", 00:21:28.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.120 "prchk_reftag": false, 00:21:28.120 "prchk_guard": false, 00:21:28.120 "ctrlr_loss_timeout_sec": 0, 00:21:28.120 "reconnect_delay_sec": 0, 00:21:28.120 "fast_io_fail_timeout_sec": 0, 00:21:28.120 "psk": "/tmp/tmp.bT2ZLoSqIU", 00:21:28.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.120 "hdgst": false, 00:21:28.120 "ddgst": false 00:21:28.120 } 00:21:28.120 }, 00:21:28.120 { 00:21:28.120 "method": "bdev_nvme_set_hotplug", 00:21:28.120 "params": { 00:21:28.120 "period_us": 100000, 00:21:28.120 "enable": false 00:21:28.120 } 00:21:28.120 }, 00:21:28.120 { 00:21:28.120 "method": "bdev_wait_for_examine" 00:21:28.120 } 00:21:28.120 ] 00:21:28.120 }, 00:21:28.120 { 00:21:28.120 "subsystem": "nbd", 00:21:28.120 "config": [] 00:21:28.120 } 00:21:28.120 ] 00:21:28.120 }' 00:21:28.120 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2546021 00:21:28.120 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2546021 ']' 00:21:28.120 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2546021 00:21:28.120 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:21:28.120 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.120 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2546021 00:21:28.120 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:28.120 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:28.120 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2546021' 00:21:28.120 killing process with pid 2546021 00:21:28.120 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2546021 00:21:28.120 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.120 00:21:28.120 Latency(us) 00:21:28.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.120 =================================================================================================================== 00:21:28.120 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.120 [2024-07-24 18:58:13.004624] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:28.120 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2546021 00:21:28.378 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2545491 00:21:28.378 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2545491 ']' 00:21:28.378 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2545491 00:21:28.378 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.378 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.378 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2545491 00:21:28.638 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.638 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.638 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2545491' 00:21:28.638 killing process with pid 2545491 00:21:28.638 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2545491 00:21:28.638 [2024-07-24 18:58:13.414510] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:28.638 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2545491 00:21:28.897 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:28.897 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.897 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.897 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.897 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:28.897 "subsystems": [ 00:21:28.897 { 
00:21:28.897 "subsystem": "keyring", 00:21:28.897 "config": [] 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "subsystem": "iobuf", 00:21:28.897 "config": [ 00:21:28.897 { 00:21:28.897 "method": "iobuf_set_options", 00:21:28.897 "params": { 00:21:28.897 "small_pool_count": 8192, 00:21:28.897 "large_pool_count": 1024, 00:21:28.897 "small_bufsize": 8192, 00:21:28.897 "large_bufsize": 135168 00:21:28.897 } 00:21:28.897 } 00:21:28.897 ] 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "subsystem": "sock", 00:21:28.897 "config": [ 00:21:28.897 { 00:21:28.897 "method": "sock_set_default_impl", 00:21:28.897 "params": { 00:21:28.897 "impl_name": "posix" 00:21:28.897 } 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "method": "sock_impl_set_options", 00:21:28.897 "params": { 00:21:28.897 "impl_name": "ssl", 00:21:28.897 "recv_buf_size": 4096, 00:21:28.897 "send_buf_size": 4096, 00:21:28.897 "enable_recv_pipe": true, 00:21:28.897 "enable_quickack": false, 00:21:28.897 "enable_placement_id": 0, 00:21:28.897 "enable_zerocopy_send_server": true, 00:21:28.897 "enable_zerocopy_send_client": false, 00:21:28.897 "zerocopy_threshold": 0, 00:21:28.897 "tls_version": 0, 00:21:28.897 "enable_ktls": false 00:21:28.897 } 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "method": "sock_impl_set_options", 00:21:28.897 "params": { 00:21:28.897 "impl_name": "posix", 00:21:28.897 "recv_buf_size": 2097152, 00:21:28.897 "send_buf_size": 2097152, 00:21:28.897 "enable_recv_pipe": true, 00:21:28.897 "enable_quickack": false, 00:21:28.897 "enable_placement_id": 0, 00:21:28.897 "enable_zerocopy_send_server": true, 00:21:28.897 "enable_zerocopy_send_client": false, 00:21:28.897 "zerocopy_threshold": 0, 00:21:28.897 "tls_version": 0, 00:21:28.897 "enable_ktls": false 00:21:28.897 } 00:21:28.897 } 00:21:28.897 ] 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "subsystem": "vmd", 00:21:28.897 "config": [] 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "subsystem": "accel", 00:21:28.897 "config": [ 00:21:28.897 { 00:21:28.897 "method": "accel_set_options", 00:21:28.897 "params": { 00:21:28.897 "small_cache_size": 128, 00:21:28.897 "large_cache_size": 16, 00:21:28.897 "task_count": 2048, 00:21:28.897 "sequence_count": 2048, 00:21:28.897 "buf_count": 2048 00:21:28.897 } 00:21:28.897 } 00:21:28.897 ] 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "subsystem": "bdev", 00:21:28.897 "config": [ 00:21:28.897 { 00:21:28.897 "method": "bdev_set_options", 00:21:28.897 "params": { 00:21:28.897 "bdev_io_pool_size": 65535, 00:21:28.897 "bdev_io_cache_size": 256, 00:21:28.897 "bdev_auto_examine": true, 00:21:28.897 "iobuf_small_cache_size": 128, 00:21:28.897 "iobuf_large_cache_size": 16 00:21:28.897 } 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "method": "bdev_raid_set_options", 00:21:28.897 "params": { 00:21:28.897 "process_window_size_kb": 1024, 00:21:28.897 "process_max_bandwidth_mb_sec": 0 00:21:28.897 } 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "method": "bdev_iscsi_set_options", 00:21:28.897 "params": { 00:21:28.897 "timeout_sec": 30 00:21:28.897 } 00:21:28.897 }, 00:21:28.897 { 00:21:28.897 "method": "bdev_nvme_set_options", 00:21:28.897 "params": { 00:21:28.897 "action_on_timeout": "none", 00:21:28.897 "timeout_us": 0, 00:21:28.897 "timeout_admin_us": 0, 00:21:28.897 "keep_alive_timeout_ms": 10000, 00:21:28.897 "arbitration_burst": 0, 00:21:28.897 "low_priority_weight": 0, 00:21:28.897 "medium_priority_weight": 0, 00:21:28.897 "high_priority_weight": 0, 00:21:28.897 "nvme_adminq_poll_period_us": 10000, 00:21:28.897 "nvme_ioq_poll_period_us": 0, 00:21:28.897 
"io_queue_requests": 0, 00:21:28.897 "delay_cmd_submit": true, 00:21:28.897 "transport_retry_count": 4, 00:21:28.897 "bdev_retry_count": 3, 00:21:28.897 "transport_ack_timeout": 0, 00:21:28.897 "ctrlr_loss_timeout_sec": 0, 00:21:28.897 "reconnect_delay_sec": 0, 00:21:28.897 "fast_io_fail_timeout_sec": 0, 00:21:28.897 "disable_auto_failback": false, 00:21:28.897 "generate_uuids": false, 00:21:28.897 "transport_tos": 0, 00:21:28.897 "nvme_error_stat": false, 00:21:28.897 "rdma_srq_size": 0, 00:21:28.897 "io_path_stat": false, 00:21:28.897 "allow_accel_sequence": false, 00:21:28.897 "rdma_max_cq_size": 0, 00:21:28.897 "rdma_cm_event_timeout_ms": 0, 00:21:28.897 "dhchap_digests": [ 00:21:28.897 "sha256", 00:21:28.897 "sha384", 00:21:28.897 "sha512" 00:21:28.897 ], 00:21:28.897 "dhchap_dhgroups": [ 00:21:28.897 "null", 00:21:28.897 "ffdhe2048", 00:21:28.898 "ffdhe3072", 00:21:28.898 "ffdhe4096", 00:21:28.898 "ffdhe6144", 00:21:28.898 "ffdhe8192" 00:21:28.898 ] 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "bdev_nvme_set_hotplug", 00:21:28.898 "params": { 00:21:28.898 "period_us": 100000, 00:21:28.898 "enable": false 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "bdev_malloc_create", 00:21:28.898 "params": { 00:21:28.898 "name": "malloc0", 00:21:28.898 "num_blocks": 8192, 00:21:28.898 "block_size": 4096, 00:21:28.898 "physical_block_size": 4096, 00:21:28.898 "uuid": "fa78bcc5-f6bb-4ba8-85b4-5a256fa01f03", 00:21:28.898 "optimal_io_boundary": 0, 00:21:28.898 "md_size": 0, 00:21:28.898 "dif_type": 0, 00:21:28.898 "dif_is_head_of_md": false, 00:21:28.898 "dif_pi_format": 0 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "bdev_wait_for_examine" 00:21:28.898 } 00:21:28.898 ] 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "subsystem": "nbd", 00:21:28.898 "config": [] 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "subsystem": "scheduler", 00:21:28.898 "config": [ 00:21:28.898 { 00:21:28.898 "method": "framework_set_scheduler", 00:21:28.898 "params": { 00:21:28.898 "name": "static" 00:21:28.898 } 00:21:28.898 } 00:21:28.898 ] 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "subsystem": "nvmf", 00:21:28.898 "config": [ 00:21:28.898 { 00:21:28.898 "method": "nvmf_set_config", 00:21:28.898 "params": { 00:21:28.898 "discovery_filter": "match_any", 00:21:28.898 "admin_cmd_passthru": { 00:21:28.898 "identify_ctrlr": false 00:21:28.898 } 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "nvmf_set_max_subsystems", 00:21:28.898 "params": { 00:21:28.898 "max_subsystems": 1024 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "nvmf_set_crdt", 00:21:28.898 "params": { 00:21:28.898 "crdt1": 0, 00:21:28.898 "crdt2": 0, 00:21:28.898 "crdt3": 0 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "nvmf_create_transport", 00:21:28.898 "params": { 00:21:28.898 "trtype": "TCP", 00:21:28.898 "max_queue_depth": 128, 00:21:28.898 "max_io_qpairs_per_ctrlr": 127, 00:21:28.898 "in_capsule_data_size": 4096, 00:21:28.898 "max_io_size": 131072, 00:21:28.898 "io_unit_size": 131072, 00:21:28.898 "max_aq_depth": 128, 00:21:28.898 "num_shared_buffers": 511, 00:21:28.898 "buf_cache_size": 4294967295, 00:21:28.898 "dif_insert_or_strip": false, 00:21:28.898 "zcopy": false, 00:21:28.898 "c2h_success": false, 00:21:28.898 "sock_priority": 0, 00:21:28.898 "abort_timeout_sec": 1, 00:21:28.898 "ack_timeout": 0, 00:21:28.898 "data_wr_pool_size": 0 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": 
"nvmf_create_subsystem", 00:21:28.898 "params": { 00:21:28.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.898 "allow_any_host": false, 00:21:28.898 "serial_number": "SPDK00000000000001", 00:21:28.898 "model_number": "SPDK bdev Controller", 00:21:28.898 "max_namespaces": 10, 00:21:28.898 "min_cntlid": 1, 00:21:28.898 "max_cntlid": 65519, 00:21:28.898 "ana_reporting": false 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "nvmf_subsystem_add_host", 00:21:28.898 "params": { 00:21:28.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.898 "host": "nqn.2016-06.io.spdk:host1", 00:21:28.898 "psk": "/tmp/tmp.bT2ZLoSqIU" 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "nvmf_subsystem_add_ns", 00:21:28.898 "params": { 00:21:28.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.898 "namespace": { 00:21:28.898 "nsid": 1, 00:21:28.898 "bdev_name": "malloc0", 00:21:28.898 "nguid": "FA78BCC5F6BB4BA885B45A256FA01F03", 00:21:28.898 "uuid": "fa78bcc5-f6bb-4ba8-85b4-5a256fa01f03", 00:21:28.898 "no_auto_visible": false 00:21:28.898 } 00:21:28.898 } 00:21:28.898 }, 00:21:28.898 { 00:21:28.898 "method": "nvmf_subsystem_add_listener", 00:21:28.898 "params": { 00:21:28.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.898 "listen_address": { 00:21:28.898 "trtype": "TCP", 00:21:28.898 "adrfam": "IPv4", 00:21:28.898 "traddr": "10.0.0.2", 00:21:28.898 "trsvcid": "4420" 00:21:28.898 }, 00:21:28.898 "secure_channel": true 00:21:28.898 } 00:21:28.898 } 00:21:28.898 ] 00:21:28.898 } 00:21:28.898 ] 00:21:28.898 }' 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2546559 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2546559 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2546559 ']' 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.898 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.898 [2024-07-24 18:58:13.789154] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:28.898 [2024-07-24 18:58:13.789217] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.898 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.898 [2024-07-24 18:58:13.876100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.156 [2024-07-24 18:58:13.979643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:29.156 [2024-07-24 18:58:13.979689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.156 [2024-07-24 18:58:13.979702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.156 [2024-07-24 18:58:13.979712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.156 [2024-07-24 18:58:13.979721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.156 [2024-07-24 18:58:13.979790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.415 [2024-07-24 18:58:14.198271] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.415 [2024-07-24 18:58:14.226061] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:29.415 [2024-07-24 18:58:14.242140] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.415 [2024-07-24 18:58:14.242373] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2546767 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2546767 /var/tmp/bdevperf.sock 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2546767 ']' 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
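Reading aid for the restart above: nothing is reconfigured by hand this time. tls.sh@196/@197 captured the state of both applications with save_config (the tgtconf and bdevperfconf JSON dumps earlier in this log), and tls.sh@203/@204 feed those blobs back in at startup through process substitution, which is why the target is launched with -c /dev/fd/62 and bdevperf with -c /dev/fd/63. The same round trip, sketched with an ordinary file instead of a file descriptor (path illustrative, workspace prefixes trimmed):

  scripts/rpc.py save_config > /tmp/tgt.json      # dump the live JSON-RPC state
  build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt.json      # replay it on the next start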
00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.984 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:29.984 "subsystems": [ 00:21:29.984 { 00:21:29.984 "subsystem": "keyring", 00:21:29.984 "config": [] 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "subsystem": "iobuf", 00:21:29.984 "config": [ 00:21:29.984 { 00:21:29.984 "method": "iobuf_set_options", 00:21:29.984 "params": { 00:21:29.984 "small_pool_count": 8192, 00:21:29.984 "large_pool_count": 1024, 00:21:29.984 "small_bufsize": 8192, 00:21:29.984 "large_bufsize": 135168 00:21:29.984 } 00:21:29.984 } 00:21:29.984 ] 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "subsystem": "sock", 00:21:29.984 "config": [ 00:21:29.984 { 00:21:29.984 "method": "sock_set_default_impl", 00:21:29.984 "params": { 00:21:29.984 "impl_name": "posix" 00:21:29.984 } 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "method": "sock_impl_set_options", 00:21:29.984 "params": { 00:21:29.984 "impl_name": "ssl", 00:21:29.984 "recv_buf_size": 4096, 00:21:29.984 "send_buf_size": 4096, 00:21:29.984 "enable_recv_pipe": true, 00:21:29.984 "enable_quickack": false, 00:21:29.984 "enable_placement_id": 0, 00:21:29.984 "enable_zerocopy_send_server": true, 00:21:29.984 "enable_zerocopy_send_client": false, 00:21:29.984 "zerocopy_threshold": 0, 00:21:29.984 "tls_version": 0, 00:21:29.984 "enable_ktls": false 00:21:29.984 } 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "method": "sock_impl_set_options", 00:21:29.984 "params": { 00:21:29.984 "impl_name": "posix", 00:21:29.984 "recv_buf_size": 2097152, 00:21:29.984 "send_buf_size": 2097152, 00:21:29.984 "enable_recv_pipe": true, 00:21:29.984 "enable_quickack": false, 00:21:29.984 "enable_placement_id": 0, 00:21:29.984 "enable_zerocopy_send_server": true, 00:21:29.984 "enable_zerocopy_send_client": false, 00:21:29.984 "zerocopy_threshold": 0, 00:21:29.984 "tls_version": 0, 00:21:29.984 "enable_ktls": false 00:21:29.984 } 00:21:29.984 } 00:21:29.984 ] 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "subsystem": "vmd", 00:21:29.984 "config": [] 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "subsystem": "accel", 00:21:29.984 "config": [ 00:21:29.984 { 00:21:29.984 "method": "accel_set_options", 00:21:29.984 "params": { 00:21:29.984 "small_cache_size": 128, 00:21:29.984 "large_cache_size": 16, 00:21:29.984 "task_count": 2048, 00:21:29.984 "sequence_count": 2048, 00:21:29.984 "buf_count": 2048 00:21:29.984 } 00:21:29.984 } 00:21:29.984 ] 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "subsystem": "bdev", 00:21:29.984 "config": [ 00:21:29.984 { 00:21:29.984 "method": "bdev_set_options", 00:21:29.984 "params": { 00:21:29.984 "bdev_io_pool_size": 65535, 00:21:29.984 "bdev_io_cache_size": 256, 00:21:29.984 "bdev_auto_examine": true, 00:21:29.984 "iobuf_small_cache_size": 128, 00:21:29.984 "iobuf_large_cache_size": 16 00:21:29.984 } 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "method": "bdev_raid_set_options", 00:21:29.984 "params": { 00:21:29.984 "process_window_size_kb": 1024, 00:21:29.984 "process_max_bandwidth_mb_sec": 0 00:21:29.984 } 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "method": "bdev_iscsi_set_options", 00:21:29.984 "params": { 00:21:29.984 "timeout_sec": 30 00:21:29.984 } 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "method": "bdev_nvme_set_options", 00:21:29.984 "params": { 00:21:29.984 "action_on_timeout": "none", 00:21:29.984 "timeout_us": 0, 00:21:29.984 "timeout_admin_us": 0, 00:21:29.984 "keep_alive_timeout_ms": 10000, 00:21:29.984 
"arbitration_burst": 0, 00:21:29.984 "low_priority_weight": 0, 00:21:29.984 "medium_priority_weight": 0, 00:21:29.984 "high_priority_weight": 0, 00:21:29.984 "nvme_adminq_poll_period_us": 10000, 00:21:29.984 "nvme_ioq_poll_period_us": 0, 00:21:29.984 "io_queue_requests": 512, 00:21:29.984 "delay_cmd_submit": true, 00:21:29.984 "transport_retry_count": 4, 00:21:29.984 "bdev_retry_count": 3, 00:21:29.984 "transport_ack_timeout": 0, 00:21:29.984 "ctrlr_loss_timeout_sec": 0, 00:21:29.984 "reconnect_delay_sec": 0, 00:21:29.984 "fast_io_fail_timeout_sec": 0, 00:21:29.984 "disable_auto_failback": false, 00:21:29.984 "generate_uuids": false, 00:21:29.984 "transport_tos": 0, 00:21:29.984 "nvme_error_stat": false, 00:21:29.984 "rdma_srq_size": 0, 00:21:29.984 "io_path_stat": false, 00:21:29.984 "allow_accel_sequence": false, 00:21:29.984 "rdma_max_cq_size": 0, 00:21:29.984 "rdma_cm_event_timeout_ms": 0, 00:21:29.984 "dhchap_digests": [ 00:21:29.984 "sha256", 00:21:29.984 "sha384", 00:21:29.984 "sha512" 00:21:29.984 ], 00:21:29.984 "dhchap_dhgroups": [ 00:21:29.984 "null", 00:21:29.984 "ffdhe2048", 00:21:29.984 "ffdhe3072", 00:21:29.984 "ffdhe4096", 00:21:29.984 "ffdhe6144", 00:21:29.984 "ffdhe8192" 00:21:29.984 ] 00:21:29.984 } 00:21:29.984 }, 00:21:29.984 { 00:21:29.984 "method": "bdev_nvme_attach_controller", 00:21:29.984 "params": { 00:21:29.984 "name": "TLSTEST", 00:21:29.984 "trtype": "TCP", 00:21:29.984 "adrfam": "IPv4", 00:21:29.984 "traddr": "10.0.0.2", 00:21:29.984 "trsvcid": "4420", 00:21:29.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.984 "prchk_reftag": false, 00:21:29.984 "prchk_guard": false, 00:21:29.984 "ctrlr_loss_timeout_sec": 0, 00:21:29.984 "reconnect_delay_sec": 0, 00:21:29.985 "fast_io_fail_timeout_sec": 0, 00:21:29.985 "psk": "/tmp/tmp.bT2ZLoSqIU", 00:21:29.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.985 "hdgst": false, 00:21:29.985 "ddgst": false 00:21:29.985 } 00:21:29.985 }, 00:21:29.985 { 00:21:29.985 "method": "bdev_nvme_set_hotplug", 00:21:29.985 "params": { 00:21:29.985 "period_us": 100000, 00:21:29.985 "enable": false 00:21:29.985 } 00:21:29.985 }, 00:21:29.985 { 00:21:29.985 "method": "bdev_wait_for_examine" 00:21:29.985 } 00:21:29.985 ] 00:21:29.985 }, 00:21:29.985 { 00:21:29.985 "subsystem": "nbd", 00:21:29.985 "config": [] 00:21:29.985 } 00:21:29.985 ] 00:21:29.985 }' 00:21:29.985 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.985 [2024-07-24 18:58:14.818009] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:21:29.985 [2024-07-24 18:58:14.818070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546767 ] 00:21:29.985 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.985 [2024-07-24 18:58:14.932711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.243 [2024-07-24 18:58:15.080546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.502 [2024-07-24 18:58:15.288563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.502 [2024-07-24 18:58:15.288733] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:30.761 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.761 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.761 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.021 Running I/O for 10 seconds... 00:21:41.001 00:21:41.001 Latency(us) 00:21:41.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.001 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:41.001 Verification LBA range: start 0x0 length 0x2000 00:21:41.001 TLSTESTn1 : 10.02 2828.90 11.05 0.00 0.00 45124.73 9949.56 59578.18 00:21:41.001 =================================================================================================================== 00:21:41.001 Total : 2828.90 11.05 0.00 0.00 45124.73 9949.56 59578.18 00:21:41.001 0 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2546767 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2546767 ']' 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2546767 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2546767 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2546767' 00:21:41.001 killing process with pid 2546767 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2546767 00:21:41.001 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.001 00:21:41.001 Latency(us) 00:21:41.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.001 
=================================================================================================================== 00:21:41.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.001 [2024-07-24 18:58:25.936700] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.001 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2546767 00:21:41.260 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2546559 00:21:41.260 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2546559 ']' 00:21:41.260 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2546559 00:21:41.260 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:41.260 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.260 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2546559 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2546559' 00:21:41.519 killing process with pid 2546559 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2546559 00:21:41.519 [2024-07-24 18:58:26.277476] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2546559 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2548688 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2548688 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2548688 ']' 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
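Pulling the pieces out of the trace, the 10-second I/O pass above is the standard bdevperf-as-initiator pattern: start bdevperf idle with -z, attach a TLS-enabled controller through its private RPC socket, then trigger the queued workload. Reassembled from the commands in this log with the long workspace prefixes trimmed (in this particular pass the attach parameters arrived via the replayed /dev/fd/63 config rather than a live RPC, but they match the tls.sh@192 invocation earlier):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests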
00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.519 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.778 [2024-07-24 18:58:26.571482] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:41.778 [2024-07-24 18:58:26.571545] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.778 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.778 [2024-07-24 18:58:26.650708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.778 [2024-07-24 18:58:26.738811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.778 [2024-07-24 18:58:26.738853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.778 [2024-07-24 18:58:26.738864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.778 [2024-07-24 18:58:26.738873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.778 [2024-07-24 18:58:26.738880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.778 [2024-07-24 18:58:26.738902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.bT2ZLoSqIU 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bT2ZLoSqIU 00:21:42.036 18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:42.294 [2024-07-24 18:58:27.102716] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.294 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:42.551 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:42.809 [2024-07-24 18:58:27.592015] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.809 [2024-07-24 18:58:27.592235] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.809 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:43.066 malloc0 00:21:43.066 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:43.323 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bT2ZLoSqIU 00:21:43.581 [2024-07-24 18:58:28.355329] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2549148 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2549148 /var/tmp/bdevperf.sock 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2549148 ']' 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.581 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.581 [2024-07-24 18:58:28.423483] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:21:43.581 [2024-07-24 18:58:28.423542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549148 ] 00:21:43.581 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.581 [2024-07-24 18:58:28.504594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.838 [2024-07-24 18:58:28.609022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.406 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.406 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:44.406 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bT2ZLoSqIU 00:21:44.702 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:44.960 [2024-07-24 18:58:29.861057] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.960 nvme0n1 00:21:45.218 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.218 Running I/O for 1 seconds... 00:21:46.152 00:21:46.152 Latency(us) 00:21:46.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.152 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:46.152 Verification LBA range: start 0x0 length 0x2000 00:21:46.153 nvme0n1 : 1.03 3595.44 14.04 0.00 0.00 35094.33 8877.15 64821.06 00:21:46.153 =================================================================================================================== 00:21:46.153 Total : 3595.44 14.04 0.00 0.00 35094.33 8877.15 64821.06 00:21:46.153 0 00:21:46.153 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2549148 00:21:46.153 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2549148 ']' 00:21:46.153 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2549148 00:21:46.153 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:46.153 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.153 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2549148 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2549148' 00:21:46.411 killing process with pid 2549148 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2549148 00:21:46.411 Received shutdown signal, 
test time was about 1.000000 seconds 00:21:46.411 00:21:46.411 Latency(us) 00:21:46.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.411 =================================================================================================================== 00:21:46.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2549148 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2548688 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2548688 ']' 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2548688 00:21:46.411 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2548688 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2548688' 00:21:46.670 killing process with pid 2548688 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2548688 00:21:46.670 [2024-07-24 18:58:31.467579] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2548688 00:21:46.670 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2549764 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2549764 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2549764 ']' 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
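For readers following the trace, the TLS target bring-up recorded above (and repeated for the next subtest below) reduces to a short rpc.py sequence. This is a sketch assembled from the commands visible in the log itself; the PSK file /tmp/tmp.bT2ZLoSqIU, the NQNs, and the 10.0.0.2:4420 listener are values specific to this run, and the target itself flags TLS support as experimental.

    #!/usr/bin/env bash
    # Sketch of the TLS-enabled NVMe/TCP target setup traced above.
    # Assumes a running nvmf_tgt; rpc.py ships with SPDK under scripts/.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.bT2ZLoSqIU                       # PSK file used by this run

    $rpc nvmf_create_transport -t tcp -o          # TCP transport init
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k             # -k enables TLS (experimental)
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"    # PSK path form is deprecated,
                                                  # per the warning in the trace

The initiator side then mirrors this with keyring_file_add_key key0 "$key" against the bdevperf RPC socket followed by bdev_nvme_attach_controller ... --psk key0, as the trace shows.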
00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.929 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.929 [2024-07-24 18:58:31.737261] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:46.929 [2024-07-24 18:58:31.737317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.929 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.929 [2024-07-24 18:58:31.821168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.929 [2024-07-24 18:58:31.910987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.930 [2024-07-24 18:58:31.911032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.930 [2024-07-24 18:58:31.911042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.930 [2024-07-24 18:58:31.911051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.930 [2024-07-24 18:58:31.911058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.930 [2024-07-24 18:58:31.911080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.189 [2024-07-24 18:58:32.055206] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.189 malloc0 00:21:47.189 [2024-07-24 18:58:32.084500] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:47.189 [2024-07-24 18:58:32.091830] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2549784 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2549784 /var/tmp/bdevperf.sock 00:21:47.189 18:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2549784 ']' 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.189 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.189 [2024-07-24 18:58:32.165261] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:47.189 [2024-07-24 18:58:32.165314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549784 ] 00:21:47.189 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.448 [2024-07-24 18:58:32.245753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.448 [2024-07-24 18:58:32.346726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.015 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.015 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:48.015 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bT2ZLoSqIU 00:21:48.274 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:48.535 [2024-07-24 18:58:33.479355] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.794 nvme0n1 00:21:48.794 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.794 Running I/O for 1 seconds... 
00:21:49.729 00:21:49.729 Latency(us) 00:21:49.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.729 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.729 Verification LBA range: start 0x0 length 0x2000 00:21:49.729 nvme0n1 : 1.02 3631.90 14.19 0.00 0.00 34852.96 9055.88 38130.04 00:21:49.729 =================================================================================================================== 00:21:49.729 Total : 3631.90 14.19 0.00 0.00 34852.96 9055.88 38130.04 00:21:49.729 0 00:21:49.729 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:49.989 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.989 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.989 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.989 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:49.989 "subsystems": [ 00:21:49.989 { 00:21:49.989 "subsystem": "keyring", 00:21:49.989 "config": [ 00:21:49.989 { 00:21:49.989 "method": "keyring_file_add_key", 00:21:49.989 "params": { 00:21:49.989 "name": "key0", 00:21:49.989 "path": "/tmp/tmp.bT2ZLoSqIU" 00:21:49.989 } 00:21:49.989 } 00:21:49.989 ] 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "subsystem": "iobuf", 00:21:49.989 "config": [ 00:21:49.989 { 00:21:49.989 "method": "iobuf_set_options", 00:21:49.989 "params": { 00:21:49.989 "small_pool_count": 8192, 00:21:49.989 "large_pool_count": 1024, 00:21:49.989 "small_bufsize": 8192, 00:21:49.989 "large_bufsize": 135168 00:21:49.989 } 00:21:49.989 } 00:21:49.989 ] 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "subsystem": "sock", 00:21:49.989 "config": [ 00:21:49.989 { 00:21:49.989 "method": "sock_set_default_impl", 00:21:49.989 "params": { 00:21:49.989 "impl_name": "posix" 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "sock_impl_set_options", 00:21:49.989 "params": { 00:21:49.989 "impl_name": "ssl", 00:21:49.989 "recv_buf_size": 4096, 00:21:49.989 "send_buf_size": 4096, 00:21:49.989 "enable_recv_pipe": true, 00:21:49.989 "enable_quickack": false, 00:21:49.989 "enable_placement_id": 0, 00:21:49.989 "enable_zerocopy_send_server": true, 00:21:49.989 "enable_zerocopy_send_client": false, 00:21:49.989 "zerocopy_threshold": 0, 00:21:49.989 "tls_version": 0, 00:21:49.989 "enable_ktls": false 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "sock_impl_set_options", 00:21:49.989 "params": { 00:21:49.989 "impl_name": "posix", 00:21:49.989 "recv_buf_size": 2097152, 00:21:49.989 "send_buf_size": 2097152, 00:21:49.989 "enable_recv_pipe": true, 00:21:49.989 "enable_quickack": false, 00:21:49.989 "enable_placement_id": 0, 00:21:49.989 "enable_zerocopy_send_server": true, 00:21:49.989 "enable_zerocopy_send_client": false, 00:21:49.989 "zerocopy_threshold": 0, 00:21:49.989 "tls_version": 0, 00:21:49.989 "enable_ktls": false 00:21:49.989 } 00:21:49.989 } 00:21:49.989 ] 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "subsystem": "vmd", 00:21:49.989 "config": [] 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "subsystem": "accel", 00:21:49.989 "config": [ 00:21:49.989 { 00:21:49.989 "method": "accel_set_options", 00:21:49.989 "params": { 00:21:49.989 "small_cache_size": 128, 00:21:49.989 "large_cache_size": 16, 00:21:49.989 "task_count": 2048, 00:21:49.989 "sequence_count": 2048, 00:21:49.989 "buf_count": 
2048 00:21:49.989 } 00:21:49.989 } 00:21:49.989 ] 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "subsystem": "bdev", 00:21:49.989 "config": [ 00:21:49.989 { 00:21:49.989 "method": "bdev_set_options", 00:21:49.989 "params": { 00:21:49.989 "bdev_io_pool_size": 65535, 00:21:49.989 "bdev_io_cache_size": 256, 00:21:49.989 "bdev_auto_examine": true, 00:21:49.989 "iobuf_small_cache_size": 128, 00:21:49.989 "iobuf_large_cache_size": 16 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "bdev_raid_set_options", 00:21:49.989 "params": { 00:21:49.989 "process_window_size_kb": 1024, 00:21:49.989 "process_max_bandwidth_mb_sec": 0 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "bdev_iscsi_set_options", 00:21:49.989 "params": { 00:21:49.989 "timeout_sec": 30 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "bdev_nvme_set_options", 00:21:49.989 "params": { 00:21:49.989 "action_on_timeout": "none", 00:21:49.989 "timeout_us": 0, 00:21:49.989 "timeout_admin_us": 0, 00:21:49.989 "keep_alive_timeout_ms": 10000, 00:21:49.989 "arbitration_burst": 0, 00:21:49.989 "low_priority_weight": 0, 00:21:49.989 "medium_priority_weight": 0, 00:21:49.989 "high_priority_weight": 0, 00:21:49.989 "nvme_adminq_poll_period_us": 10000, 00:21:49.989 "nvme_ioq_poll_period_us": 0, 00:21:49.989 "io_queue_requests": 0, 00:21:49.989 "delay_cmd_submit": true, 00:21:49.989 "transport_retry_count": 4, 00:21:49.989 "bdev_retry_count": 3, 00:21:49.989 "transport_ack_timeout": 0, 00:21:49.989 "ctrlr_loss_timeout_sec": 0, 00:21:49.989 "reconnect_delay_sec": 0, 00:21:49.989 "fast_io_fail_timeout_sec": 0, 00:21:49.989 "disable_auto_failback": false, 00:21:49.989 "generate_uuids": false, 00:21:49.989 "transport_tos": 0, 00:21:49.989 "nvme_error_stat": false, 00:21:49.989 "rdma_srq_size": 0, 00:21:49.989 "io_path_stat": false, 00:21:49.989 "allow_accel_sequence": false, 00:21:49.989 "rdma_max_cq_size": 0, 00:21:49.989 "rdma_cm_event_timeout_ms": 0, 00:21:49.989 "dhchap_digests": [ 00:21:49.989 "sha256", 00:21:49.989 "sha384", 00:21:49.989 "sha512" 00:21:49.989 ], 00:21:49.989 "dhchap_dhgroups": [ 00:21:49.989 "null", 00:21:49.989 "ffdhe2048", 00:21:49.989 "ffdhe3072", 00:21:49.989 "ffdhe4096", 00:21:49.989 "ffdhe6144", 00:21:49.989 "ffdhe8192" 00:21:49.989 ] 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "bdev_nvme_set_hotplug", 00:21:49.989 "params": { 00:21:49.989 "period_us": 100000, 00:21:49.989 "enable": false 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "bdev_malloc_create", 00:21:49.989 "params": { 00:21:49.989 "name": "malloc0", 00:21:49.989 "num_blocks": 8192, 00:21:49.989 "block_size": 4096, 00:21:49.989 "physical_block_size": 4096, 00:21:49.989 "uuid": "f479dbed-745d-4cd9-a8b5-39c29ae4fc66", 00:21:49.989 "optimal_io_boundary": 0, 00:21:49.989 "md_size": 0, 00:21:49.989 "dif_type": 0, 00:21:49.989 "dif_is_head_of_md": false, 00:21:49.989 "dif_pi_format": 0 00:21:49.989 } 00:21:49.989 }, 00:21:49.989 { 00:21:49.989 "method": "bdev_wait_for_examine" 00:21:49.989 } 00:21:49.989 ] 00:21:49.989 }, 00:21:49.989 { 00:21:49.990 "subsystem": "nbd", 00:21:49.990 "config": [] 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "subsystem": "scheduler", 00:21:49.990 "config": [ 00:21:49.990 { 00:21:49.990 "method": "framework_set_scheduler", 00:21:49.990 "params": { 00:21:49.990 "name": "static" 00:21:49.990 } 00:21:49.990 } 00:21:49.990 ] 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "subsystem": "nvmf", 00:21:49.990 "config": [ 00:21:49.990 { 00:21:49.990 
"method": "nvmf_set_config", 00:21:49.990 "params": { 00:21:49.990 "discovery_filter": "match_any", 00:21:49.990 "admin_cmd_passthru": { 00:21:49.990 "identify_ctrlr": false 00:21:49.990 } 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_set_max_subsystems", 00:21:49.990 "params": { 00:21:49.990 "max_subsystems": 1024 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_set_crdt", 00:21:49.990 "params": { 00:21:49.990 "crdt1": 0, 00:21:49.990 "crdt2": 0, 00:21:49.990 "crdt3": 0 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_create_transport", 00:21:49.990 "params": { 00:21:49.990 "trtype": "TCP", 00:21:49.990 "max_queue_depth": 128, 00:21:49.990 "max_io_qpairs_per_ctrlr": 127, 00:21:49.990 "in_capsule_data_size": 4096, 00:21:49.990 "max_io_size": 131072, 00:21:49.990 "io_unit_size": 131072, 00:21:49.990 "max_aq_depth": 128, 00:21:49.990 "num_shared_buffers": 511, 00:21:49.990 "buf_cache_size": 4294967295, 00:21:49.990 "dif_insert_or_strip": false, 00:21:49.990 "zcopy": false, 00:21:49.990 "c2h_success": false, 00:21:49.990 "sock_priority": 0, 00:21:49.990 "abort_timeout_sec": 1, 00:21:49.990 "ack_timeout": 0, 00:21:49.990 "data_wr_pool_size": 0 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_create_subsystem", 00:21:49.990 "params": { 00:21:49.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.990 "allow_any_host": false, 00:21:49.990 "serial_number": "00000000000000000000", 00:21:49.990 "model_number": "SPDK bdev Controller", 00:21:49.990 "max_namespaces": 32, 00:21:49.990 "min_cntlid": 1, 00:21:49.990 "max_cntlid": 65519, 00:21:49.990 "ana_reporting": false 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_subsystem_add_host", 00:21:49.990 "params": { 00:21:49.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.990 "host": "nqn.2016-06.io.spdk:host1", 00:21:49.990 "psk": "key0" 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_subsystem_add_ns", 00:21:49.990 "params": { 00:21:49.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.990 "namespace": { 00:21:49.990 "nsid": 1, 00:21:49.990 "bdev_name": "malloc0", 00:21:49.990 "nguid": "F479DBED745D4CD9A8B539C29AE4FC66", 00:21:49.990 "uuid": "f479dbed-745d-4cd9-a8b5-39c29ae4fc66", 00:21:49.990 "no_auto_visible": false 00:21:49.990 } 00:21:49.990 } 00:21:49.990 }, 00:21:49.990 { 00:21:49.990 "method": "nvmf_subsystem_add_listener", 00:21:49.990 "params": { 00:21:49.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.990 "listen_address": { 00:21:49.990 "trtype": "TCP", 00:21:49.990 "adrfam": "IPv4", 00:21:49.990 "traddr": "10.0.0.2", 00:21:49.990 "trsvcid": "4420" 00:21:49.990 }, 00:21:49.990 "secure_channel": false, 00:21:49.990 "sock_impl": "ssl" 00:21:49.990 } 00:21:49.990 } 00:21:49.990 ] 00:21:49.990 } 00:21:49.990 ] 00:21:49.990 }' 00:21:49.990 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:50.250 "subsystems": [ 00:21:50.250 { 00:21:50.250 "subsystem": "keyring", 00:21:50.250 "config": [ 00:21:50.250 { 00:21:50.250 "method": "keyring_file_add_key", 00:21:50.250 "params": { 00:21:50.250 "name": "key0", 00:21:50.250 "path": "/tmp/tmp.bT2ZLoSqIU" 00:21:50.250 } 00:21:50.250 } 00:21:50.250 ] 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "subsystem": "iobuf", 00:21:50.250 
"config": [ 00:21:50.250 { 00:21:50.250 "method": "iobuf_set_options", 00:21:50.250 "params": { 00:21:50.250 "small_pool_count": 8192, 00:21:50.250 "large_pool_count": 1024, 00:21:50.250 "small_bufsize": 8192, 00:21:50.250 "large_bufsize": 135168 00:21:50.250 } 00:21:50.250 } 00:21:50.250 ] 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "subsystem": "sock", 00:21:50.250 "config": [ 00:21:50.250 { 00:21:50.250 "method": "sock_set_default_impl", 00:21:50.250 "params": { 00:21:50.250 "impl_name": "posix" 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "sock_impl_set_options", 00:21:50.250 "params": { 00:21:50.250 "impl_name": "ssl", 00:21:50.250 "recv_buf_size": 4096, 00:21:50.250 "send_buf_size": 4096, 00:21:50.250 "enable_recv_pipe": true, 00:21:50.250 "enable_quickack": false, 00:21:50.250 "enable_placement_id": 0, 00:21:50.250 "enable_zerocopy_send_server": true, 00:21:50.250 "enable_zerocopy_send_client": false, 00:21:50.250 "zerocopy_threshold": 0, 00:21:50.250 "tls_version": 0, 00:21:50.250 "enable_ktls": false 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "sock_impl_set_options", 00:21:50.250 "params": { 00:21:50.250 "impl_name": "posix", 00:21:50.250 "recv_buf_size": 2097152, 00:21:50.250 "send_buf_size": 2097152, 00:21:50.250 "enable_recv_pipe": true, 00:21:50.250 "enable_quickack": false, 00:21:50.250 "enable_placement_id": 0, 00:21:50.250 "enable_zerocopy_send_server": true, 00:21:50.250 "enable_zerocopy_send_client": false, 00:21:50.250 "zerocopy_threshold": 0, 00:21:50.250 "tls_version": 0, 00:21:50.250 "enable_ktls": false 00:21:50.250 } 00:21:50.250 } 00:21:50.250 ] 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "subsystem": "vmd", 00:21:50.250 "config": [] 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "subsystem": "accel", 00:21:50.250 "config": [ 00:21:50.250 { 00:21:50.250 "method": "accel_set_options", 00:21:50.250 "params": { 00:21:50.250 "small_cache_size": 128, 00:21:50.250 "large_cache_size": 16, 00:21:50.250 "task_count": 2048, 00:21:50.250 "sequence_count": 2048, 00:21:50.250 "buf_count": 2048 00:21:50.250 } 00:21:50.250 } 00:21:50.250 ] 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "subsystem": "bdev", 00:21:50.250 "config": [ 00:21:50.250 { 00:21:50.250 "method": "bdev_set_options", 00:21:50.250 "params": { 00:21:50.250 "bdev_io_pool_size": 65535, 00:21:50.250 "bdev_io_cache_size": 256, 00:21:50.250 "bdev_auto_examine": true, 00:21:50.250 "iobuf_small_cache_size": 128, 00:21:50.250 "iobuf_large_cache_size": 16 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_raid_set_options", 00:21:50.250 "params": { 00:21:50.250 "process_window_size_kb": 1024, 00:21:50.250 "process_max_bandwidth_mb_sec": 0 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_iscsi_set_options", 00:21:50.250 "params": { 00:21:50.250 "timeout_sec": 30 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_nvme_set_options", 00:21:50.250 "params": { 00:21:50.250 "action_on_timeout": "none", 00:21:50.250 "timeout_us": 0, 00:21:50.250 "timeout_admin_us": 0, 00:21:50.250 "keep_alive_timeout_ms": 10000, 00:21:50.250 "arbitration_burst": 0, 00:21:50.250 "low_priority_weight": 0, 00:21:50.250 "medium_priority_weight": 0, 00:21:50.250 "high_priority_weight": 0, 00:21:50.250 "nvme_adminq_poll_period_us": 10000, 00:21:50.250 "nvme_ioq_poll_period_us": 0, 00:21:50.250 "io_queue_requests": 512, 00:21:50.250 "delay_cmd_submit": true, 00:21:50.250 "transport_retry_count": 4, 00:21:50.250 "bdev_retry_count": 3, 
00:21:50.250 "transport_ack_timeout": 0, 00:21:50.250 "ctrlr_loss_timeout_sec": 0, 00:21:50.250 "reconnect_delay_sec": 0, 00:21:50.250 "fast_io_fail_timeout_sec": 0, 00:21:50.250 "disable_auto_failback": false, 00:21:50.250 "generate_uuids": false, 00:21:50.250 "transport_tos": 0, 00:21:50.250 "nvme_error_stat": false, 00:21:50.250 "rdma_srq_size": 0, 00:21:50.250 "io_path_stat": false, 00:21:50.250 "allow_accel_sequence": false, 00:21:50.250 "rdma_max_cq_size": 0, 00:21:50.250 "rdma_cm_event_timeout_ms": 0, 00:21:50.250 "dhchap_digests": [ 00:21:50.250 "sha256", 00:21:50.250 "sha384", 00:21:50.250 "sha512" 00:21:50.250 ], 00:21:50.250 "dhchap_dhgroups": [ 00:21:50.250 "null", 00:21:50.250 "ffdhe2048", 00:21:50.250 "ffdhe3072", 00:21:50.250 "ffdhe4096", 00:21:50.250 "ffdhe6144", 00:21:50.250 "ffdhe8192" 00:21:50.250 ] 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_nvme_attach_controller", 00:21:50.250 "params": { 00:21:50.250 "name": "nvme0", 00:21:50.250 "trtype": "TCP", 00:21:50.250 "adrfam": "IPv4", 00:21:50.250 "traddr": "10.0.0.2", 00:21:50.250 "trsvcid": "4420", 00:21:50.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.250 "prchk_reftag": false, 00:21:50.250 "prchk_guard": false, 00:21:50.250 "ctrlr_loss_timeout_sec": 0, 00:21:50.250 "reconnect_delay_sec": 0, 00:21:50.250 "fast_io_fail_timeout_sec": 0, 00:21:50.250 "psk": "key0", 00:21:50.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.250 "hdgst": false, 00:21:50.250 "ddgst": false 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_nvme_set_hotplug", 00:21:50.250 "params": { 00:21:50.250 "period_us": 100000, 00:21:50.250 "enable": false 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_enable_histogram", 00:21:50.250 "params": { 00:21:50.250 "name": "nvme0n1", 00:21:50.250 "enable": true 00:21:50.250 } 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "method": "bdev_wait_for_examine" 00:21:50.250 } 00:21:50.250 ] 00:21:50.250 }, 00:21:50.250 { 00:21:50.250 "subsystem": "nbd", 00:21:50.250 "config": [] 00:21:50.250 } 00:21:50.250 ] 00:21:50.250 }' 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2549784 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2549784 ']' 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2549784 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.250 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2549784 00:21:50.251 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:50.251 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:50.251 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2549784' 00:21:50.251 killing process with pid 2549784 00:21:50.251 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2549784 00:21:50.251 Received shutdown signal, test time was about 1.000000 seconds 00:21:50.251 00:21:50.251 Latency(us) 00:21:50.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.251 
=================================================================================================================== 00:21:50.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.251 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2549784 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2549764 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2549764 ']' 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2549764 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2549764 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2549764' 00:21:50.510 killing process with pid 2549764 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2549764 00:21:50.510 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2549764 00:21:50.769 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:50.769 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.769 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.769 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:50.769 "subsystems": [ 00:21:50.769 { 00:21:50.769 "subsystem": "keyring", 00:21:50.769 "config": [ 00:21:50.769 { 00:21:50.769 "method": "keyring_file_add_key", 00:21:50.769 "params": { 00:21:50.769 "name": "key0", 00:21:50.769 "path": "/tmp/tmp.bT2ZLoSqIU" 00:21:50.769 } 00:21:50.769 } 00:21:50.769 ] 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "subsystem": "iobuf", 00:21:50.769 "config": [ 00:21:50.769 { 00:21:50.769 "method": "iobuf_set_options", 00:21:50.769 "params": { 00:21:50.769 "small_pool_count": 8192, 00:21:50.769 "large_pool_count": 1024, 00:21:50.769 "small_bufsize": 8192, 00:21:50.769 "large_bufsize": 135168 00:21:50.769 } 00:21:50.769 } 00:21:50.769 ] 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "subsystem": "sock", 00:21:50.769 "config": [ 00:21:50.769 { 00:21:50.769 "method": "sock_set_default_impl", 00:21:50.769 "params": { 00:21:50.769 "impl_name": "posix" 00:21:50.769 } 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "method": "sock_impl_set_options", 00:21:50.769 "params": { 00:21:50.769 "impl_name": "ssl", 00:21:50.769 "recv_buf_size": 4096, 00:21:50.769 "send_buf_size": 4096, 00:21:50.769 "enable_recv_pipe": true, 00:21:50.769 "enable_quickack": false, 00:21:50.769 "enable_placement_id": 0, 00:21:50.769 "enable_zerocopy_send_server": true, 00:21:50.769 "enable_zerocopy_send_client": false, 00:21:50.769 "zerocopy_threshold": 0, 00:21:50.769 "tls_version": 0, 00:21:50.769 "enable_ktls": false 00:21:50.769 } 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "method": 
"sock_impl_set_options", 00:21:50.769 "params": { 00:21:50.769 "impl_name": "posix", 00:21:50.769 "recv_buf_size": 2097152, 00:21:50.769 "send_buf_size": 2097152, 00:21:50.769 "enable_recv_pipe": true, 00:21:50.769 "enable_quickack": false, 00:21:50.769 "enable_placement_id": 0, 00:21:50.769 "enable_zerocopy_send_server": true, 00:21:50.769 "enable_zerocopy_send_client": false, 00:21:50.769 "zerocopy_threshold": 0, 00:21:50.769 "tls_version": 0, 00:21:50.769 "enable_ktls": false 00:21:50.769 } 00:21:50.769 } 00:21:50.769 ] 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "subsystem": "vmd", 00:21:50.769 "config": [] 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "subsystem": "accel", 00:21:50.769 "config": [ 00:21:50.769 { 00:21:50.769 "method": "accel_set_options", 00:21:50.769 "params": { 00:21:50.769 "small_cache_size": 128, 00:21:50.769 "large_cache_size": 16, 00:21:50.769 "task_count": 2048, 00:21:50.769 "sequence_count": 2048, 00:21:50.769 "buf_count": 2048 00:21:50.769 } 00:21:50.769 } 00:21:50.769 ] 00:21:50.769 }, 00:21:50.769 { 00:21:50.769 "subsystem": "bdev", 00:21:50.769 "config": [ 00:21:50.770 { 00:21:50.770 "method": "bdev_set_options", 00:21:50.770 "params": { 00:21:50.770 "bdev_io_pool_size": 65535, 00:21:50.770 "bdev_io_cache_size": 256, 00:21:50.770 "bdev_auto_examine": true, 00:21:50.770 "iobuf_small_cache_size": 128, 00:21:50.770 "iobuf_large_cache_size": 16 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "bdev_raid_set_options", 00:21:50.770 "params": { 00:21:50.770 "process_window_size_kb": 1024, 00:21:50.770 "process_max_bandwidth_mb_sec": 0 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "bdev_iscsi_set_options", 00:21:50.770 "params": { 00:21:50.770 "timeout_sec": 30 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "bdev_nvme_set_options", 00:21:50.770 "params": { 00:21:50.770 "action_on_timeout": "none", 00:21:50.770 "timeout_us": 0, 00:21:50.770 "timeout_admin_us": 0, 00:21:50.770 "keep_alive_timeout_ms": 10000, 00:21:50.770 "arbitration_burst": 0, 00:21:50.770 "low_priority_weight": 0, 00:21:50.770 "medium_priority_weight": 0, 00:21:50.770 "high_priority_weight": 0, 00:21:50.770 "nvme_adminq_poll_period_us": 10000, 00:21:50.770 "nvme_ioq_poll_period_us": 0, 00:21:50.770 "io_queue_requests": 0, 00:21:50.770 "delay_cmd_submit": true, 00:21:50.770 "transport_retry_count": 4, 00:21:50.770 "bdev_retry_count": 3, 00:21:50.770 "transport_ack_timeout": 0, 00:21:50.770 "ctrlr_loss_timeout_sec": 0, 00:21:50.770 "reconnect_delay_sec": 0, 00:21:50.770 "fast_io_fail_timeout_sec": 0, 00:21:50.770 "disable_auto_failback": false, 00:21:50.770 "generate_uuids": false, 00:21:50.770 "transport_tos": 0, 00:21:50.770 "nvme_error_stat": false, 00:21:50.770 "rdma_srq_size": 0, 00:21:50.770 "io_path_stat": false, 00:21:50.770 "allow_accel_sequence": false, 00:21:50.770 "rdma_max_cq_size": 0, 00:21:50.770 "rdma_cm_event_timeout_ms": 0, 00:21:50.770 "dhchap_digests": [ 00:21:50.770 "sha256", 00:21:50.770 "sha384", 00:21:50.770 "sha512" 00:21:50.770 ], 00:21:50.770 "dhchap_dhgroups": [ 00:21:50.770 "null", 00:21:50.770 "ffdhe2048", 00:21:50.770 "ffdhe3072", 00:21:50.770 "ffdhe4096", 00:21:50.770 "ffdhe6144", 00:21:50.770 "ffdhe8192" 00:21:50.770 ] 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "bdev_nvme_set_hotplug", 00:21:50.770 "params": { 00:21:50.770 "period_us": 100000, 00:21:50.770 "enable": false 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "bdev_malloc_create", 00:21:50.770 
"params": { 00:21:50.770 "name": "malloc0", 00:21:50.770 "num_blocks": 8192, 00:21:50.770 "block_size": 4096, 00:21:50.770 "physical_block_size": 4096, 00:21:50.770 "uuid": "f479dbed-745d-4cd9-a8b5-39c29ae4fc66", 00:21:50.770 "optimal_io_boundary": 0, 00:21:50.770 "md_size": 0, 00:21:50.770 "dif_type": 0, 00:21:50.770 "dif_is_head_of_md": false, 00:21:50.770 "dif_pi_format": 0 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "bdev_wait_for_examine" 00:21:50.770 } 00:21:50.770 ] 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "subsystem": "nbd", 00:21:50.770 "config": [] 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "subsystem": "scheduler", 00:21:50.770 "config": [ 00:21:50.770 { 00:21:50.770 "method": "framework_set_scheduler", 00:21:50.770 "params": { 00:21:50.770 "name": "static" 00:21:50.770 } 00:21:50.770 } 00:21:50.770 ] 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "subsystem": "nvmf", 00:21:50.770 "config": [ 00:21:50.770 { 00:21:50.770 "method": "nvmf_set_config", 00:21:50.770 "params": { 00:21:50.770 "discovery_filter": "match_any", 00:21:50.770 "admin_cmd_passthru": { 00:21:50.770 "identify_ctrlr": false 00:21:50.770 } 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_set_max_subsystems", 00:21:50.770 "params": { 00:21:50.770 "max_subsystems": 1024 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_set_crdt", 00:21:50.770 "params": { 00:21:50.770 "crdt1": 0, 00:21:50.770 "crdt2": 0, 00:21:50.770 "crdt3": 0 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_create_transport", 00:21:50.770 "params": { 00:21:50.770 "trtype": "TCP", 00:21:50.770 "max_queue_depth": 128, 00:21:50.770 "max_io_qpairs_per_ctrlr": 127, 00:21:50.770 "in_capsule_data_size": 4096, 00:21:50.770 "max_io_size": 131072, 00:21:50.770 "io_unit_size": 131072, 00:21:50.770 "max_aq_depth": 128, 00:21:50.770 "num_shared_buffers": 511, 00:21:50.770 "buf_cache_size": 4294967295, 00:21:50.770 "dif_insert_or_strip": false, 00:21:50.770 "zcopy": false, 00:21:50.770 "c2h_success": false, 00:21:50.770 "sock_priority": 0, 00:21:50.770 "abort_timeout_sec": 1, 00:21:50.770 "ack_timeout": 0, 00:21:50.770 "data_wr_pool_size": 0 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_create_subsystem", 00:21:50.770 "params": { 00:21:50.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.770 "allow_any_host": false, 00:21:50.770 "serial_number": "00000000000000000000", 00:21:50.770 "model_number": "SPDK bdev Controller", 00:21:50.770 "max_namespaces": 32, 00:21:50.770 "min_cntlid": 1, 00:21:50.770 "max_cntlid": 65519, 00:21:50.770 "ana_reporting": false 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_subsystem_add_host", 00:21:50.770 "params": { 00:21:50.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.770 "host": "nqn.2016-06.io.spdk:host1", 00:21:50.770 "psk": "key0" 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_subsystem_add_ns", 00:21:50.770 "params": { 00:21:50.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.770 "namespace": { 00:21:50.770 "nsid": 1, 00:21:50.770 "bdev_name": "malloc0", 00:21:50.770 "nguid": "F479DBED745D4CD9A8B539C29AE4FC66", 00:21:50.770 "uuid": "f479dbed-745d-4cd9-a8b5-39c29ae4fc66", 00:21:50.770 "no_auto_visible": false 00:21:50.770 } 00:21:50.770 } 00:21:50.770 }, 00:21:50.770 { 00:21:50.770 "method": "nvmf_subsystem_add_listener", 00:21:50.770 "params": { 00:21:50.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.770 "listen_address": { 
00:21:50.770 "trtype": "TCP", 00:21:50.770 "adrfam": "IPv4", 00:21:50.770 "traddr": "10.0.0.2", 00:21:50.770 "trsvcid": "4420" 00:21:50.770 }, 00:21:50.770 "secure_channel": false, 00:21:50.770 "sock_impl": "ssl" 00:21:50.770 } 00:21:50.770 } 00:21:50.770 ] 00:21:50.770 } 00:21:50.770 ] 00:21:50.770 }' 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2550394 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2550394 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2550394 ']' 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.770 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.770 [2024-07-24 18:58:35.691065] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:21:50.770 [2024-07-24 18:58:35.691122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.770 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.770 [2024-07-24 18:58:35.777855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.029 [2024-07-24 18:58:35.867424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.029 [2024-07-24 18:58:35.867470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.029 [2024-07-24 18:58:35.867484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.029 [2024-07-24 18:58:35.867492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.029 [2024-07-24 18:58:35.867500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:51.029 [2024-07-24 18:58:35.867556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.288 [2024-07-24 18:58:36.086782] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.288 [2024-07-24 18:58:36.129038] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.288 [2024-07-24 18:58:36.129246] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2550616 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2550616 /var/tmp/bdevperf.sock 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2550616 ']' 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
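The -c /dev/fd/62 and -c /dev/fd/63 arguments seen in this part of the trace come from feeding the JSON captured earlier with save_config straight back into the freshly started target and bdevperf. A minimal sketch of that pattern follows, assuming bash process substitution is the mechanism (the actual fd numbers are whatever the shell assigns at runtime, 62 and 63 in this run); $SPDK and $rpc are shorthand for the workspace paths shown in the trace.

    # Replay a previously saved configuration through an anonymous fd.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    tgtcfg=$("$rpc" save_config)                        # target config (default /var/tmp/spdk.sock)
    bperfcfg=$("$rpc" -s /var/tmp/bdevperf.sock save_config)

    # <(echo ...) exposes the JSON as a readable /dev/fd/N path.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # The harness then waits for each RPC socket before driving I/O.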
00:21:51.856 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:51.856 "subsystems": [ 00:21:51.856 { 00:21:51.856 "subsystem": "keyring", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "keyring_file_add_key", 00:21:51.856 "params": { 00:21:51.856 "name": "key0", 00:21:51.856 "path": "/tmp/tmp.bT2ZLoSqIU" 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "iobuf", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "iobuf_set_options", 00:21:51.856 "params": { 00:21:51.856 "small_pool_count": 8192, 00:21:51.856 "large_pool_count": 1024, 00:21:51.856 "small_bufsize": 8192, 00:21:51.856 "large_bufsize": 135168 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "sock", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "sock_set_default_impl", 00:21:51.856 "params": { 00:21:51.856 "impl_name": "posix" 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "sock_impl_set_options", 00:21:51.856 "params": { 00:21:51.856 "impl_name": "ssl", 00:21:51.856 "recv_buf_size": 4096, 00:21:51.856 "send_buf_size": 4096, 00:21:51.856 "enable_recv_pipe": true, 00:21:51.856 "enable_quickack": false, 00:21:51.856 "enable_placement_id": 0, 00:21:51.856 "enable_zerocopy_send_server": true, 00:21:51.856 "enable_zerocopy_send_client": false, 00:21:51.856 "zerocopy_threshold": 0, 00:21:51.856 "tls_version": 0, 00:21:51.856 "enable_ktls": false 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "sock_impl_set_options", 00:21:51.856 "params": { 00:21:51.856 "impl_name": "posix", 00:21:51.856 "recv_buf_size": 2097152, 00:21:51.856 "send_buf_size": 2097152, 00:21:51.856 "enable_recv_pipe": true, 00:21:51.856 "enable_quickack": false, 00:21:51.856 "enable_placement_id": 0, 00:21:51.856 "enable_zerocopy_send_server": true, 00:21:51.856 "enable_zerocopy_send_client": false, 00:21:51.856 "zerocopy_threshold": 0, 00:21:51.856 "tls_version": 0, 00:21:51.856 "enable_ktls": false 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "vmd", 00:21:51.856 "config": [] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "accel", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "accel_set_options", 00:21:51.856 "params": { 00:21:51.856 "small_cache_size": 128, 00:21:51.856 "large_cache_size": 16, 00:21:51.856 "task_count": 2048, 00:21:51.856 "sequence_count": 2048, 00:21:51.856 "buf_count": 2048 00:21:51.856 } 00:21:51.856 } 00:21:51.856 ] 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "subsystem": "bdev", 00:21:51.856 "config": [ 00:21:51.856 { 00:21:51.856 "method": "bdev_set_options", 00:21:51.856 "params": { 00:21:51.856 "bdev_io_pool_size": 65535, 00:21:51.856 "bdev_io_cache_size": 256, 00:21:51.856 "bdev_auto_examine": true, 00:21:51.856 "iobuf_small_cache_size": 128, 00:21:51.856 "iobuf_large_cache_size": 16 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_raid_set_options", 00:21:51.856 "params": { 00:21:51.856 "process_window_size_kb": 1024, 00:21:51.856 "process_max_bandwidth_mb_sec": 0 00:21:51.856 } 00:21:51.856 }, 00:21:51.856 { 00:21:51.856 "method": "bdev_iscsi_set_options", 00:21:51.856 "params": { 00:21:51.857 "timeout_sec": 30 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "bdev_nvme_set_options", 00:21:51.857 "params": { 00:21:51.857 "action_on_timeout": "none", 00:21:51.857 "timeout_us": 0, 
00:21:51.857 "timeout_admin_us": 0, 00:21:51.857 "keep_alive_timeout_ms": 10000, 00:21:51.857 "arbitration_burst": 0, 00:21:51.857 "low_priority_weight": 0, 00:21:51.857 "medium_priority_weight": 0, 00:21:51.857 "high_priority_weight": 0, 00:21:51.857 "nvme_adminq_poll_period_us": 10000, 00:21:51.857 "nvme_ioq_poll_period_us": 0, 00:21:51.857 "io_queue_requests": 512, 00:21:51.857 "delay_cmd_submit": true, 00:21:51.857 "transport_retry_count": 4, 00:21:51.857 "bdev_retry_count": 3, 00:21:51.857 "transport_ack_timeout": 0, 00:21:51.857 "ctrlr_loss_timeout_sec": 0, 00:21:51.857 "reconnect_delay_sec": 0, 00:21:51.857 "fast_io_fail_timeout_sec": 0, 00:21:51.857 "disable_auto_failback": false, 00:21:51.857 "generate_uuids": false, 00:21:51.857 "transport_tos": 0, 00:21:51.857 "nvme_error_stat": false, 00:21:51.857 "rdma_srq_size": 0, 00:21:51.857 "io_path_stat": false, 00:21:51.857 "allow_accel_sequence": false, 00:21:51.857 "rdma_max_cq_size": 0, 00:21:51.857 "rdma_cm_event_timeout_ms": 0, 00:21:51.857 "dhchap_digests": [ 00:21:51.857 "sha256", 00:21:51.857 "sha384", 00:21:51.857 "sha512" 00:21:51.857 ], 00:21:51.857 "dhchap_dhgroups": [ 00:21:51.857 "null", 00:21:51.857 "ffdhe2048", 00:21:51.857 "ffdhe3072", 00:21:51.857 "ffdhe4096", 00:21:51.857 "ffdhe6144", 00:21:51.857 "ffdhe8192" 00:21:51.857 ] 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "bdev_nvme_attach_controller", 00:21:51.857 "params": { 00:21:51.857 "name": "nvme0", 00:21:51.857 "trtype": "TCP", 00:21:51.857 "adrfam": "IPv4", 00:21:51.857 "traddr": "10.0.0.2", 00:21:51.857 "trsvcid": "4420", 00:21:51.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.857 "prchk_reftag": false, 00:21:51.857 "prchk_guard": false, 00:21:51.857 "ctrlr_loss_timeout_sec": 0, 00:21:51.857 "reconnect_delay_sec": 0, 00:21:51.857 "fast_io_fail_timeout_sec": 0, 00:21:51.857 "psk": "key0", 00:21:51.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.857 "hdgst": false, 00:21:51.857 "ddgst": false 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "bdev_nvme_set_hotplug", 00:21:51.857 "params": { 00:21:51.857 "period_us": 100000, 00:21:51.857 "enable": false 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "bdev_enable_histogram", 00:21:51.857 "params": { 00:21:51.857 "name": "nvme0n1", 00:21:51.857 "enable": true 00:21:51.857 } 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "method": "bdev_wait_for_examine" 00:21:51.857 } 00:21:51.857 ] 00:21:51.857 }, 00:21:51.857 { 00:21:51.857 "subsystem": "nbd", 00:21:51.857 "config": [] 00:21:51.857 } 00:21:51.857 ] 00:21:51.857 }' 00:21:51.857 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.857 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.857 [2024-07-24 18:58:36.721457] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:21:51.857 [2024-07-24 18:58:36.721517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550616 ] 00:21:51.857 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.857 [2024-07-24 18:58:36.803547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.116 [2024-07-24 18:58:36.906443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.116 [2024-07-24 18:58:37.070572] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.684 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.684 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.684 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:52.684 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:52.943 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.943 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:53.201 Running I/O for 1 seconds... 00:21:54.137 00:21:54.137 Latency(us) 00:21:54.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:54.137 Verification LBA range: start 0x0 length 0x2000 00:21:54.137 nvme0n1 : 1.03 3574.57 13.96 0.00 0.00 35348.36 8996.31 53620.36 00:21:54.137 =================================================================================================================== 00:21:54.137 Total : 3574.57 13.96 0.00 0.00 35348.36 8996.31 53620.36 00:21:54.137 0 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:54.137 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:54.137 nvmf_trace.0 00:21:54.396 18:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2550616 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2550616 ']' 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2550616 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2550616 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2550616' 00:21:54.396 killing process with pid 2550616 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2550616 00:21:54.396 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.396 00:21:54.396 Latency(us) 00:21:54.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.396 =================================================================================================================== 00:21:54.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.396 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2550616 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.656 rmmod nvme_tcp 00:21:54.656 rmmod nvme_fabrics 00:21:54.656 rmmod nvme_keyring 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2550394 ']' 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2550394 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2550394 ']' 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2550394 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.656 18:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2550394 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2550394' 00:21:54.656 killing process with pid 2550394 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2550394 00:21:54.656 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2550394 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.915 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1jfW9bJNNq /tmp/tmp.TvZj8QgV66 /tmp/tmp.bT2ZLoSqIU 00:21:57.452 00:21:57.452 real 1m33.713s 00:21:57.452 user 2m32.846s 00:21:57.452 sys 0m27.582s 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.452 ************************************ 00:21:57.452 END TEST nvmf_tls 00:21:57.452 ************************************ 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.452 ************************************ 00:21:57.452 START TEST nvmf_fips 00:21:57.452 ************************************ 00:21:57.452 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:57.452 * Looking for test storage... 
00:21:57.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.452 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:57.453 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:57.453 Error setting digest 00:21:57.454 00F27F66117F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:57.454 00F27F66117F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.454 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:02.762 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 
00:22:02.762 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:02.762 Found net devices under 0000:af:00.0: cvl_0_0 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.762 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:02.763 Found net devices under 0000:af:00.1: cvl_0_1 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.763 
18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.763 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.022 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:22:03.023 00:22:03.023 --- 10.0.0.2 ping statistics --- 00:22:03.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.023 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:03.023 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:22:03.023 00:22:03.023 --- 10.0.0.1 ping statistics --- 00:22:03.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.023 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.023 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2554834 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2554834 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2554834 ']' 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.282 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:03.282 [2024-07-24 18:58:48.131597] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:22:03.282 [2024-07-24 18:58:48.131665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.282 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.282 [2024-07-24 18:58:48.219085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.540 [2024-07-24 18:58:48.323315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.540 [2024-07-24 18:58:48.323360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.540 [2024-07-24 18:58:48.323372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.540 [2024-07-24 18:58:48.323383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.540 [2024-07-24 18:58:48.323393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.540 [2024-07-24 18:58:48.323417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:04.108 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.109 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:04.109 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.109 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.109 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.109 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.368 [2024-07-24 18:58:49.315793] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.368 [2024-07-24 18:58:49.331780] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.368 [2024-07-24 18:58:49.332004] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.368 
[2024-07-24 18:58:49.362473] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.368 malloc0 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2555039 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2555039 /var/tmp/bdevperf.sock 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2555039 ']' 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.626 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:04.626 [2024-07-24 18:58:49.474918] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:22:04.626 [2024-07-24 18:58:49.474987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555039 ] 00:22:04.626 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.626 [2024-07-24 18:58:49.588116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.886 [2024-07-24 18:58:49.739705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.469 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.469 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:05.469 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:05.727 [2024-07-24 18:58:50.554529] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.727 [2024-07-24 18:58:50.554725] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:05.727 TLSTESTn1 00:22:05.727 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.985 Running I/O for 10 seconds... 
00:22:15.965 00:22:15.965 Latency(us) 00:22:15.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.965 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:15.965 Verification LBA range: start 0x0 length 0x2000 00:22:15.965 TLSTESTn1 : 10.03 2802.78 10.95 0.00 0.00 45537.65 12571.00 69110.69 00:22:15.965 =================================================================================================================== 00:22:15.965 Total : 2802.78 10.95 0.00 0.00 45537.65 12571.00 69110.69 00:22:15.965 0 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:15.965 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:15.965 nvmf_trace.0 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2555039 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2555039 ']' 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2555039 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.223 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2555039 00:22:16.223 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:16.223 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:16.224 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2555039' 00:22:16.224 killing process with pid 2555039 00:22:16.224 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2555039 00:22:16.224 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.224 00:22:16.224 Latency(us) 00:22:16.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.224 =================================================================================================================== 00:22:16.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.224 
[2024-07-24 18:59:01.031611] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.224 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2555039 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.482 rmmod nvme_tcp 00:22:16.482 rmmod nvme_fabrics 00:22:16.482 rmmod nvme_keyring 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2554834 ']' 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2554834 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2554834 ']' 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2554834 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.482 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2554834 00:22:16.741 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:16.741 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:16.741 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2554834' 00:22:16.741 killing process with pid 2554834 00:22:16.741 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2554834 00:22:16.741 [2024-07-24 18:59:01.505598] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.741 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2554834 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.000 18:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.000 18:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.909 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.909 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:18.909 00:22:18.909 real 0m21.918s 00:22:18.909 user 0m25.030s 00:22:18.909 sys 0m8.487s 00:22:18.909 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.909 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.909 ************************************ 00:22:18.909 END TEST nvmf_fips 00:22:18.909 ************************************ 00:22:19.168 18:59:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:19.168 18:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:19.168 18:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:19.168 18:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:19.168 ************************************ 00:22:19.168 START TEST nvmf_control_msg_list 00:22:19.168 ************************************ 00:22:19.168 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:19.168 * Looking for test storage... 
00:22:19.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.168 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # : 0 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.169 18:59:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.760 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # e810=() 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # x722=() 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # mlx=() 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:25.761 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:25.761 Found 0000:af:00.1 (0x8086 - 0x159b) 
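The two "Found 0000:af:00.x (0x8086 - 0x159b)" lines above are the harness matching PCI functions against its table of supported NIC IDs. As a rough stand-alone sketch of that scan (sysfs-based, not the SPDK script itself; the vendor/device pair is the E810 one matched here):

    # Enumerate Intel E810 functions (vendor 0x8086, device 0x1592/0x159b),
    # printing the same "Found ..." lines the nvmf/common.sh loop emits above.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")
        device=$(cat "$dev/device")
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b)
                echo "Found ${dev##*/} ($vendor - $device)" ;;
        esac
    done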
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:22:25.761 Found net devices under 0000:af:00.0: cvl_0_0
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:22:25.761 Found net devices under 0000:af:00.1: cvl_0_1
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # is_hw=yes
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:25.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:25.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms
00:22:25.761
00:22:25.761 --- 10.0.0.2 ping statistics ---
00:22:25.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:25.761 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:25.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:25.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms
00:22:25.761
00:22:25.761 --- 10.0.0.1 ping statistics ---
00:22:25.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:25.761 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # return 0
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:25.761 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # nvmfpid=2560744
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # waitforlisten 2560744
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@829 -- # '[' -z 2560744 ']'
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:25.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
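Condensing the namespace setup above: the target-side port cvl_0_0 is moved into its own network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2) exchange traffic over the physical E810 link rather than the loopback path, and nvmf_tgt is then launched inside that namespace. A minimal replay of those commands (run as root; $SPDK_DIR standing in for the workspace checkout):

    # Target port gets its own netns; the initiator port stays in the root ns.
    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the target inside the namespace with the same flags as above.
    ip netns exec "$ns" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &

The two pings that follow confirm the wiring in both directions before the target is considered usable.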
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:25.762 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:25.762 [2024-07-24 18:59:09.904584] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:22:25.762 [2024-07-24 18:59:09.904650] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:25.762 EAL: No free 2048 kB hugepages reported on node 1
00:22:25.762 [2024-07-24 18:59:09.989271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:25.762 [2024-07-24 18:59:10.088449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:25.762 [2024-07-24 18:59:10.088493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:25.762 [2024-07-24 18:59:10.088503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:25.762 [2024-07-24 18:59:10.088512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:25.762 [2024-07-24 18:59:10.088519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:25.762 [2024-07-24 18:59:10.088545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # return 0
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@728 -- # xtrace_disable
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:26.021 [2024-07-24 18:59:10.909479] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
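The transport above is created with --control-msg-num 1 on purpose: the test then aims three spdk_nvme_perf instances at the target (below), so concurrent admin-queue traffic has to contend for a single control-message buffer, exercising the control message list this test is named after. As a sketch, the rpc_cmd calls map onto plain rpc.py invocations roughly like this ($SPDK_DIR is an assumed stand-in for the checkout; rpc.py defaults to the /var/tmp/spdk.sock socket the target listens on):

    # Approximate rpc.py equivalents of the rpc_cmd calls in this test,
    # with the transport starved down to one control message buffer.
    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001

The namespace, bdev, and listener calls that follow below take the same form.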
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:26.021 Malloc0
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:26.021 [2024-07-24 18:59:10.959943] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2561018
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2561019
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2561020
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2561018
00:22:26.021 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:26.021 EAL: No free 2048 kB hugepages reported on node 1
00:22:26.021 EAL: No free 2048 kB hugepages reported on node 1
00:22:26.021 EAL: No free 2048 kB hugepages reported on node 1
00:22:26.280 [2024-07-24 18:59:11.062685] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:26.280 [2024-07-24 18:59:11.092206] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:26.280 [2024-07-24 18:59:11.106701] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:27.654 Initializing NVMe Controllers
00:22:27.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:27.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:27.654 Initialization complete. Launching workers.
00:22:27.654 ========================================================
00:22:27.654 Latency(us)
00:22:27.654 Device Information : IOPS MiB/s Average min max
00:22:27.654 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 16.00 0.06 63468.67 58846.55 63846.27
00:22:27.654 ========================================================
00:22:27.654 Total : 16.00 0.06 63468.67 58846.55 63846.27
00:22:27.654
00:22:27.914 Initializing NVMe Controllers
00:22:27.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:27.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:27.914 Initialization complete. Launching workers.
00:22:27.914 ========================================================
00:22:27.914 Latency(us)
00:22:27.914 Device Information : IOPS MiB/s Average min max
00:22:27.914 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 63.84 0.25 15877.03 12109.66 18282.16
00:22:27.914 ========================================================
00:22:27.914 Total : 63.84 0.25 15877.03 12109.66 18282.16
00:22:27.914
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2561019
00:22:27.914 Initializing NVMe Controllers
00:22:27.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:27.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:27.914 Initialization complete. Launching workers.
00:22:27.914 ========================================================
00:22:27.914 Latency(us)
00:22:27.914 Device Information : IOPS MiB/s Average min max
00:22:27.914 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 63.52 0.25 15742.42 9789.55 15964.97
00:22:27.914 ========================================================
00:22:27.914 Total : 63.52 0.25 15742.42 9789.55 15964.97
00:22:27.914
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2561020
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # nvmftestfini
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@117 -- # sync
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@120 -- # set +e
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:27.914 rmmod nvme_tcp
00:22:27.914 rmmod nvme_fabrics
00:22:27.914 rmmod nvme_keyring
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set -e
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # return 0
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # '[' -n 2560744 ']'
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # killprocess 2560744
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@948 -- # '[' -z 2560744 ']'
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # kill -0 2560744
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@953 -- # uname
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2560744
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2560744'
00:22:27.914 killing process with pid 2560744
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@967 -- # kill 2560744
00:22:27.914 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # wait 2560744
00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.173 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@1 -- # process_shm --id 0 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@806 -- # type=--id 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@807 -- # id=0 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:30.709 nvmf_trace.0 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@821 -- # return 0 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@1 -- # nvmftestfini 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@117 -- # sync 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@120 -- # set +e 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set -e 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # return 0 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@489 -- # '[' -n 2560744 ']' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # killprocess 2560744 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@948 -- # '[' -z 2560744 ']' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # kill -0 2560744 00:22:30.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2560744) - No such process 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@975 -- # echo 'Process with pid 2560744 is not found' 00:22:30.709 Process with pid 2560744 is not found 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.709 00:22:30.709 real 0m11.332s 00:22:30.709 user 0m5.035s 00:22:30.709 sys 0m5.033s 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:30.709 ************************************ 00:22:30.709 END TEST nvmf_control_msg_list 00:22:30.709 ************************************ 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:30.709 ************************************ 00:22:30.709 START TEST nvmf_wait_for_buf 00:22:30.709 ************************************ 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:30.709 * Looking for test storage... 
00:22:30.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.709 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # : 0 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' -n '' ']' 
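The ballooning PATH above is mechanical: paths/export.sh is re-sourced by every test file, and each pass prepends the same go/golangci/protoc directories, so by this point the prefix appears several times over. A dedup pass of the following shape (illustrative only; the harness itself does not do this) would collapse it while preserving first-seen order:

    # Collapse duplicate PATH entries, keeping the first occurrence of each.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:*$//')
    export PATH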
00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.710 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # e810=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # x722=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # mlx=() 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
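The e810/x722/mlx arrays being filled in the entries above and below are keyed lookups into a pci_bus_cache associative array built by gather_supported_nvmf_pci_devs, mapping "vendor:device" to the PCI addresses found on the bus. A toy model of the pattern (the cache contents here are made up apart from the two addresses this host actually reports):

    # Toy model of the pci_bus_cache lookups above. A missing key expands
    # to nothing, so unsupported device IDs contribute no array elements.
    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1" )
    intel=0x8086
    e810=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # not present on this host: adds nothing
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # adds both E810 ports
    echo "${#e810[@]} E810 function(s): ${e810[*]}"

The intentionally unquoted expansion is what lets a single cache hit contribute multiple array elements, which is why the later (( 2 == 0 )) check sees two devices.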
00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:35.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:35.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.983 18:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:35.983 Found net devices under 0000:af:00.0: cvl_0_0 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:35.983 Found net devices under 0000:af:00.1: cvl_0_1 00:22:35.983 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.984 18:59:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:36.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:36.243 00:22:36.243 --- 10.0.0.2 ping statistics --- 00:22:36.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.243 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:36.243 00:22:36.243 --- 10.0.0.1 ping statistics --- 00:22:36.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.243 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # return 0 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.243 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # nvmfpid=2564878 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # waitforlisten 2564878 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@829 -- # '[' -z 2564878 ']' 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.502 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.502 [2024-07-24 18:59:21.315552] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
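The nvmfappstart call above reduces to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers. A minimal sketch using the paths and flags seen in this log (rpc_cmd in the test scripts is a thin wrapper over scripts/rpc.py; the polling loop below is illustrative, not the waitforlisten implementation verbatim):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# poll until the app listens on /var/tmp/spdk.sock; rpc_get_methods is a cheap query
# (illustrative loop -- the real waitforlisten helper also caps its retries)
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done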
00:22:36.502 [2024-07-24 18:59:21.315613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.502 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.502 [2024-07-24 18:59:21.400492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.502 [2024-07-24 18:59:21.489366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.502 [2024-07-24 18:59:21.489412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.502 [2024-07-24 18:59:21.489423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.502 [2024-07-24 18:59:21.489431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.502 [2024-07-24 18:59:21.489438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.502 [2024-07-24 18:59:21.489459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # return 0 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:36.761 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 Malloc0 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 [2024-07-24 18:59:21.657005] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 [2024-07-24 18:59:21.681162] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.762 18:59:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:36.762 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.762 [2024-07-24 18:59:21.769687] 
subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:38.666 Initializing NVMe Controllers 00:22:38.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:38.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:38.666 Initialization complete. Launching workers. 00:22:38.666 ======================================================== 00:22:38.666 Latency(us) 00:22:38.666 Device Information : IOPS MiB/s Average min max 00:22:38.666 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 82.00 10.25 51044.98 8029.27 191538.23 00:22:38.666 ======================================================== 00:22:38.666 Total : 82.00 10.25 51044.98 8029.27 191538.23 00:22:38.666 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1286 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1286 -eq 0 ]] 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # nvmftestfini 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@117 -- # sync 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@120 -- # set +e 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.666 rmmod nvme_tcp 00:22:38.666 rmmod nvme_fabrics 00:22:38.666 rmmod nvme_keyring 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set -e 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # return 0 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # '[' -n 2564878 ']' 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # killprocess 2564878 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@948 -- # '[' -z 2564878 ']' 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # kill -0 2564878 00:22:38.666 18:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@953 -- # uname 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2564878 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2564878' 00:22:38.666 killing process with pid 2564878 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@967 -- # kill 2564878 00:22:38.666 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # wait 2564878 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.924 18:59:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@1 -- # process_shm --id 0 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@806 -- # type=--id 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@807 -- # id=0 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:40.828 nvmf_trace.0 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@821 -- # return 0 00:22:40.828 18:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@1 -- # nvmftestfini 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.828 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@117 -- # sync 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@120 -- # set +e 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set -e 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # return 0 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # '[' -n 2564878 ']' 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # killprocess 2564878 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@948 -- # '[' -z 2564878 ']' 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # kill -0 2564878 00:22:41.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2564878) - No such process 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@975 -- # echo 'Process with pid 2564878 is not found' 00:22:41.099 Process with pid 2564878 is not found 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.099 00:22:41.099 real 0m10.504s 00:22:41.099 user 0m4.076s 00:22:41.099 sys 0m4.870s 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.099 ************************************ 00:22:41.099 END TEST nvmf_wait_for_buf 00:22:41.099 ************************************ 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 
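Condensed from the wait_for_buf run that just finished: shrink the iobuf small pool below what the TCP transport wants, drive I/O, then require a non-zero small-pool retry count. A sketch of the RPC sequence with the exact values from this log (rpc.py stands for scripts/rpc.py in the SPDK tree, which the test's rpc_cmd helper wraps):

rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # pool deliberately smaller than transport demand
rpc.py framework_start_init
rpc.py bdev_malloc_create -b Malloc0 32 512
rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# the test passes when requests had to wait for buffers, i.e. retry > 0 (1286 in the run above)
rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'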
00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.099 18:59:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:46.395 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.395 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:46.654 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:46.654 Found net devices under 0000:af:00.0: cvl_0_0 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.654 18:59:31 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:46.654 Found net devices under 0000:af:00.1: cvl_0_1 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.654 ************************************ 00:22:46.654 START TEST nvmf_perf_adq 00:22:46.654 ************************************ 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:46.654 * Looking for test storage... 00:22:46.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.654 18:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.236 18:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.236 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:53.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:53.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.237 18:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:53.237 Found net devices under 0000:af:00.0: cvl_0_0 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:53.237 Found net devices under 0000:af:00.1: cvl_0_1 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- 
# adq_reload_driver 00:22:53.237 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:53.495 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:56.027 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
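Once the ice driver reload settles, the nvmftestinit below rebuilds the same two-port loopback topology used by the previous test: one E810 port moves into a network namespace and serves as the target, the other stays in the root namespace as the initiator. Condensed from the nvmf_tcp_init commands visible in this log (the address-flush steps are omitted here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                            # root ns -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> initiator check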
00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:01.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:01.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.300 18:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:01.300 Found net devices under 0000:af:00.0: cvl_0_0 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:01.300 Found net devices under 0000:af:00.1: cvl_0_1 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.300 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:23:01.301 00:23:01.301 --- 10.0.0.2 ping statistics --- 00:23:01.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.301 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:23:01.301 00:23:01.301 --- 10.0.0.1 ping statistics --- 00:23:01.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.301 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2573641 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2573641 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2573641 ']' 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.301 18:59:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.301 [2024-07-24 18:59:45.874812] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
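The trace above is nvmf_tcp_init building the back-to-back test topology: one e810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, and both directions are ping-verified before nvmf_tgt is started inside the namespace. A minimal sketch of the same setup, assuming two physically looped ports with the interface names from this run:

  # target port lives in its own namespace so traffic actually crosses the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, test ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP default port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Every target-side process, including nvmf_tgt itself, is then launched under ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt command line above carries that prefix.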
00:23:01.301 [2024-07-24 18:59:45.874871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.301 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.301 [2024-07-24 18:59:45.962600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.301 [2024-07-24 18:59:46.052030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.301 [2024-07-24 18:59:46.052076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.301 [2024-07-24 18:59:46.052090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.301 [2024-07-24 18:59:46.052099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.301 [2024-07-24 18:59:46.052107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.301 [2024-07-24 18:59:46.052169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.301 [2024-07-24 18:59:46.052283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.301 [2024-07-24 18:59:46.052394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.301 [2024-07-24 18:59:46.052394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.869 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 [2024-07-24 18:59:47.019762] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 Malloc1 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.129 [2024-07-24 18:59:47.083584] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2573803 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:02.129 18:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:02.129 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:04.666 "tick_rate": 2200000000, 00:23:04.666 "poll_groups": [ 00:23:04.666 { 00:23:04.666 "name": "nvmf_tgt_poll_group_000", 00:23:04.666 "admin_qpairs": 1, 00:23:04.666 "io_qpairs": 1, 00:23:04.666 "current_admin_qpairs": 1, 00:23:04.666 "current_io_qpairs": 1, 00:23:04.666 "pending_bdev_io": 0, 00:23:04.666 "completed_nvme_io": 11886, 00:23:04.666 "transports": [ 00:23:04.666 { 00:23:04.666 "trtype": "TCP" 00:23:04.666 } 00:23:04.666 ] 00:23:04.666 }, 00:23:04.666 { 00:23:04.666 "name": "nvmf_tgt_poll_group_001", 00:23:04.666 "admin_qpairs": 0, 00:23:04.666 "io_qpairs": 1, 00:23:04.666 "current_admin_qpairs": 0, 00:23:04.666 "current_io_qpairs": 1, 00:23:04.666 "pending_bdev_io": 0, 00:23:04.666 "completed_nvme_io": 8216, 00:23:04.666 "transports": [ 00:23:04.666 { 00:23:04.666 "trtype": "TCP" 00:23:04.666 } 00:23:04.666 ] 00:23:04.666 }, 00:23:04.666 { 00:23:04.666 "name": "nvmf_tgt_poll_group_002", 00:23:04.666 "admin_qpairs": 0, 00:23:04.666 "io_qpairs": 1, 00:23:04.666 "current_admin_qpairs": 0, 00:23:04.666 "current_io_qpairs": 1, 00:23:04.666 "pending_bdev_io": 0, 00:23:04.666 "completed_nvme_io": 8300, 00:23:04.666 "transports": [ 00:23:04.666 { 00:23:04.666 "trtype": "TCP" 00:23:04.666 } 00:23:04.666 ] 00:23:04.666 }, 00:23:04.666 { 00:23:04.666 "name": "nvmf_tgt_poll_group_003", 00:23:04.666 "admin_qpairs": 0, 00:23:04.666 "io_qpairs": 1, 00:23:04.666 "current_admin_qpairs": 0, 00:23:04.666 "current_io_qpairs": 1, 00:23:04.666 "pending_bdev_io": 0, 00:23:04.666 "completed_nvme_io": 12995, 00:23:04.666 "transports": [ 00:23:04.666 { 00:23:04.666 "trtype": "TCP" 00:23:04.666 } 00:23:04.666 ] 00:23:04.666 } 00:23:04.666 ] 00:23:04.666 }' 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:04.666 18:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2573803 00:23:12.820 Initializing NVMe Controllers 00:23:12.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:12.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:12.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 
00:23:12.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:12.820 Initialization complete. Launching workers. 00:23:12.820 ======================================================== 00:23:12.820 Latency(us) 00:23:12.820 Device Information : IOPS MiB/s Average min max 00:23:12.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6678.79 26.09 9585.58 3131.19 14382.83 00:23:12.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4242.55 16.57 15087.73 5138.66 24483.61 00:23:12.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4276.84 16.71 14963.34 5436.30 24851.35 00:23:12.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6159.74 24.06 10395.84 2837.09 17923.13 00:23:12.820 ======================================================== 00:23:12.820 Total : 21357.91 83.43 11989.09 2837.09 24851.35 00:23:12.820 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.820 rmmod nvme_tcp 00:23:12.820 rmmod nvme_fabrics 00:23:12.820 rmmod nvme_keyring 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2573641 ']' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2573641 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2573641 ']' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2573641 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2573641 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2573641' 00:23:12.820 killing process with pid 2573641 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2573641 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@972 -- # wait 2573641 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.820 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.723 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.723 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:14.723 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:16.101 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:18.638 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.907 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:23.908 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:23.908 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:23.908 Found net devices under 0000:af:00.0: cvl_0_0 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
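After the first perf pass the harness tears the stack down, reloads the ice driver (rmmod ice; modprobe ice; sleep 5) so the ADQ knobs start from a clean state, and re-runs device discovery. gather_supported_nvmf_pci_devs recognizes the adapters purely by PCI vendor/device ID (0x8086:0x159b for these e810 ports) and resolves each PCI function to its kernel net device through sysfs; a rough equivalent of that lookup, assuming the 0000:af:00.0/.1 bus addresses from this machine:

  # map each E810 PCI function to the netdev its driver registered
  for pci in 0000:af:00.0 0000:af:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$path" ] || continue       # driver bound no netdev for this function
          echo "Found net devices under $pci: ${path##*/}"
      done
  done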
00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:23.908 Found net devices under 0000:af:00.1: cvl_0_1 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:23.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:23:23.908 00:23:23.908 --- 10.0.0.2 ping statistics --- 00:23:23.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.908 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:23:23.908 00:23:23.908 --- 10.0.0.1 ping statistics --- 00:23:23.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.908 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.908 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:23.909 net.core.busy_poll = 1 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:23.909 net.core.busy_read = 1 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:23.909 19:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2578303 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2578303 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2578303 ']' 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.909 19:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.909 [2024-07-24 19:00:08.780331] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:23:23.909 [2024-07-24 19:00:08.780387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.909 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.909 [2024-07-24 19:00:08.867590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.167 [2024-07-24 19:00:08.961252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.167 [2024-07-24 19:00:08.961294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.167 [2024-07-24 19:00:08.961305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.167 [2024-07-24 19:00:08.961315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.167 [2024-07-24 19:00:08.961322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
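adq_configure_driver, traced just above, is where ADQ actually gets wired up: hardware TC offload and busy polling are enabled, an mqprio qdisc splits the interface's queues into two hardware traffic classes, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into class 1 with skip_sw so the match happens in the NIC. A condensed sketch of those steps, run in the target namespace and assuming the interface name from this run:

  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS ethtool --offload cvl_0_0 hw-tc-offload on
  $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: queues 0-1 serve default traffic, queues 2-3 serve ADQ
  $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP for the target address into hardware TC 1, no software fallback
  $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching target-side half is the nvmf_create_transport call further down, which now passes --sock-priority 1, together with sock_impl_set_options --enable-placement-id 1 so SPDK groups connections by the NIC-assigned queue.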
00:23:24.167 [2024-07-24 19:00:08.961365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.167 [2024-07-24 19:00:08.961477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.167 [2024-07-24 19:00:08.961587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.167 [2024-07-24 19:00:08.961588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.732 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.732 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:24.732 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.732 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.732 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 [2024-07-24 19:00:09.937020] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 Malloc1 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.992 [2024-07-24 19:00:09.989927] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2578790 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:24.992 19:00:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:25.251 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.172 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:27.172 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.172 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.172 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.172 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:27.172 "tick_rate": 2200000000, 00:23:27.172 "poll_groups": [ 00:23:27.172 { 00:23:27.172 "name": "nvmf_tgt_poll_group_000", 00:23:27.172 "admin_qpairs": 1, 00:23:27.172 "io_qpairs": 2, 00:23:27.172 "current_admin_qpairs": 1, 00:23:27.172 
"current_io_qpairs": 2, 00:23:27.172 "pending_bdev_io": 0, 00:23:27.172 "completed_nvme_io": 16130, 00:23:27.172 "transports": [ 00:23:27.172 { 00:23:27.172 "trtype": "TCP" 00:23:27.172 } 00:23:27.172 ] 00:23:27.172 }, 00:23:27.172 { 00:23:27.172 "name": "nvmf_tgt_poll_group_001", 00:23:27.172 "admin_qpairs": 0, 00:23:27.172 "io_qpairs": 2, 00:23:27.172 "current_admin_qpairs": 0, 00:23:27.172 "current_io_qpairs": 2, 00:23:27.172 "pending_bdev_io": 0, 00:23:27.172 "completed_nvme_io": 10265, 00:23:27.172 "transports": [ 00:23:27.172 { 00:23:27.172 "trtype": "TCP" 00:23:27.172 } 00:23:27.172 ] 00:23:27.172 }, 00:23:27.172 { 00:23:27.172 "name": "nvmf_tgt_poll_group_002", 00:23:27.172 "admin_qpairs": 0, 00:23:27.172 "io_qpairs": 0, 00:23:27.172 "current_admin_qpairs": 0, 00:23:27.172 "current_io_qpairs": 0, 00:23:27.172 "pending_bdev_io": 0, 00:23:27.172 "completed_nvme_io": 0, 00:23:27.172 "transports": [ 00:23:27.172 { 00:23:27.172 "trtype": "TCP" 00:23:27.172 } 00:23:27.172 ] 00:23:27.172 }, 00:23:27.172 { 00:23:27.172 "name": "nvmf_tgt_poll_group_003", 00:23:27.172 "admin_qpairs": 0, 00:23:27.172 "io_qpairs": 0, 00:23:27.172 "current_admin_qpairs": 0, 00:23:27.173 "current_io_qpairs": 0, 00:23:27.173 "pending_bdev_io": 0, 00:23:27.173 "completed_nvme_io": 0, 00:23:27.173 "transports": [ 00:23:27.173 { 00:23:27.173 "trtype": "TCP" 00:23:27.173 } 00:23:27.173 ] 00:23:27.173 } 00:23:27.173 ] 00:23:27.173 }' 00:23:27.173 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:27.173 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:27.173 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:27.173 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:27.173 19:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2578790 00:23:35.336 Initializing NVMe Controllers 00:23:35.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:35.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:35.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:35.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:35.336 Initialization complete. Launching workers. 
00:23:35.336 ======================================================== 00:23:35.336 Latency(us) 00:23:35.336 Device Information : IOPS MiB/s Average min max 00:23:35.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 2688.97 10.50 23809.84 4866.19 75279.05 00:23:35.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 2752.07 10.75 23261.71 4327.70 73116.22 00:23:35.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3909.05 15.27 16381.32 2431.60 65311.63 00:23:35.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4468.75 17.46 14373.00 2401.74 64158.99 00:23:35.336 ======================================================== 00:23:35.336 Total : 13818.84 53.98 18547.61 2401.74 75279.05 00:23:35.336 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.336 rmmod nvme_tcp 00:23:35.336 rmmod nvme_fabrics 00:23:35.336 rmmod nvme_keyring 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2578303 ']' 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2578303 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2578303 ']' 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2578303 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2578303 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2578303' 00:23:35.336 killing process with pid 2578303 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2578303 00:23:35.336 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2578303 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.594 
19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.594 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:38.883 00:23:38.883 real 0m52.186s 00:23:38.883 user 2m51.369s 00:23:38.883 sys 0m9.335s 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.883 ************************************ 00:23:38.883 END TEST nvmf_perf_adq 00:23:38.883 ************************************ 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:38.883 ************************************ 00:23:38.883 START TEST nvmf_shutdown 00:23:38.883 ************************************ 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:38.883 * Looking for test storage... 
00:23:38.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.883 19:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:38.883 19:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:38.883 ************************************ 00:23:38.883 START TEST nvmf_shutdown_tc1 00:23:38.883 ************************************ 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.883 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:45.453 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:45.453 19:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:45.453 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:45.453 Found net devices under 0000:af:00.0: cvl_0_0 00:23:45.453 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:45.454 Found net devices under 0000:af:00.1: cvl_0_1 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.454 19:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:23:45.454 00:23:45.454 --- 10.0.0.2 ping statistics --- 00:23:45.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.454 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:23:45.454 00:23:45.454 --- 10.0.0.1 ping statistics --- 00:23:45.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.454 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2584470 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2584470 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2584470 ']' 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.454 19:00:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:45.454 [2024-07-24 19:00:29.817519] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:23:45.454 [2024-07-24 19:00:29.817582] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.454 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.454 [2024-07-24 19:00:29.906002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.454 [2024-07-24 19:00:30.012477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.454 [2024-07-24 19:00:30.012522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.454 [2024-07-24 19:00:30.012535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.454 [2024-07-24 19:00:30.012546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.454 [2024-07-24 19:00:30.012556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
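The trace above is nvmftestinit assembling the namespace-based TCP test bed and nvmfappstart launching the target inside it; the reactor notices that follow show the target's four cores (mask 0x1E) coming up. A minimal sketch of the equivalent manual steps, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing that nvmf/common.sh uses in this run:

    # target port moves into a private namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic (port 4420) arriving on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, as the ping statistics above do
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # launch the target in the namespace; -m 0x1E pins reactors to cores 1-4
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &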
00:23:45.454 [2024-07-24 19:00:30.012699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.454 [2024-07-24 19:00:30.012808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.454 [2024-07-24 19:00:30.012838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:45.454 [2024-07-24 19:00:30.012839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.020 [2024-07-24 19:00:30.808439] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:46.020 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.021 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.021 Malloc1 00:23:46.021 [2024-07-24 19:00:30.918869] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.021 Malloc2 00:23:46.021 Malloc3 00:23:46.279 Malloc4 00:23:46.279 Malloc5 00:23:46.279 Malloc6 00:23:46.279 Malloc7 00:23:46.279 Malloc8 00:23:46.279 Malloc9 00:23:46.539 Malloc10 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2584782 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2584782 /var/tmp/bdevperf.sock 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2584782 ']' 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.539 19:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.539 { 00:23:46.539 "params": { 00:23:46.539 "name": "Nvme$subsystem", 00:23:46.539 "trtype": "$TEST_TRANSPORT", 00:23:46.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.539 "adrfam": "ipv4", 00:23:46.539 "trsvcid": "$NVMF_PORT", 00:23:46.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.539 "hdgst": ${hdgst:-false}, 00:23:46.539 "ddgst": ${ddgst:-false} 00:23:46.539 }, 00:23:46.539 "method": "bdev_nvme_attach_controller" 00:23:46.539 } 00:23:46.539 EOF 00:23:46.539 )") 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.539 { 00:23:46.539 "params": { 00:23:46.539 "name": "Nvme$subsystem", 00:23:46.539 "trtype": "$TEST_TRANSPORT", 00:23:46.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.539 "adrfam": "ipv4", 00:23:46.539 "trsvcid": "$NVMF_PORT", 00:23:46.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.539 "hdgst": ${hdgst:-false}, 00:23:46.539 "ddgst": ${ddgst:-false} 00:23:46.539 }, 00:23:46.539 "method": "bdev_nvme_attach_controller" 00:23:46.539 } 00:23:46.539 EOF 00:23:46.539 )") 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.539 { 00:23:46.539 "params": { 00:23:46.539 "name": 
"Nvme$subsystem", 00:23:46.539 "trtype": "$TEST_TRANSPORT", 00:23:46.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.539 "adrfam": "ipv4", 00:23:46.539 "trsvcid": "$NVMF_PORT", 00:23:46.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.539 "hdgst": ${hdgst:-false}, 00:23:46.539 "ddgst": ${ddgst:-false} 00:23:46.539 }, 00:23:46.539 "method": "bdev_nvme_attach_controller" 00:23:46.539 } 00:23:46.539 EOF 00:23:46.539 )") 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.539 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.539 { 00:23:46.539 "params": { 00:23:46.539 "name": "Nvme$subsystem", 00:23:46.539 "trtype": "$TEST_TRANSPORT", 00:23:46.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.539 "adrfam": "ipv4", 00:23:46.539 "trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.540 { 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme$subsystem", 00:23:46.540 "trtype": "$TEST_TRANSPORT", 00:23:46.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.540 { 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme$subsystem", 00:23:46.540 "trtype": "$TEST_TRANSPORT", 00:23:46.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.540 { 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme$subsystem", 00:23:46.540 "trtype": "$TEST_TRANSPORT", 00:23:46.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 [2024-07-24 19:00:31.414232] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:23:46.540 [2024-07-24 19:00:31.414296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.540 { 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme$subsystem", 00:23:46.540 "trtype": "$TEST_TRANSPORT", 00:23:46.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.540 { 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme$subsystem", 00:23:46.540 "trtype": "$TEST_TRANSPORT", 00:23:46.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.540 { 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme$subsystem", 00:23:46.540 "trtype": "$TEST_TRANSPORT", 00:23:46.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 
"trsvcid": "$NVMF_PORT", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.540 "hdgst": ${hdgst:-false}, 00:23:46.540 "ddgst": ${ddgst:-false} 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 } 00:23:46.540 EOF 00:23:46.540 )") 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:46.540 19:00:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme1", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "4420", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.540 "hdgst": false, 00:23:46.540 "ddgst": false 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 },{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme2", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "4420", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:46.540 "hdgst": false, 00:23:46.540 "ddgst": false 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 },{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme3", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "4420", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:46.540 "hdgst": false, 00:23:46.540 "ddgst": false 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 },{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme4", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "4420", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:46.540 "hdgst": false, 00:23:46.540 "ddgst": false 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 },{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme5", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "4420", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:46.540 "hdgst": false, 00:23:46.540 "ddgst": false 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 },{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme6", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.540 "adrfam": "ipv4", 00:23:46.540 "trsvcid": "4420", 00:23:46.540 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:46.540 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:46.540 "hdgst": false, 00:23:46.540 "ddgst": false 00:23:46.540 }, 00:23:46.540 "method": "bdev_nvme_attach_controller" 00:23:46.540 },{ 00:23:46.540 "params": { 00:23:46.540 "name": "Nvme7", 00:23:46.540 "trtype": "tcp", 00:23:46.540 "traddr": "10.0.0.2", 00:23:46.541 "adrfam": "ipv4", 
00:23:46.541 "trsvcid": "4420", 00:23:46.541 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:46.541 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:46.541 "hdgst": false, 00:23:46.541 "ddgst": false 00:23:46.541 }, 00:23:46.541 "method": "bdev_nvme_attach_controller" 00:23:46.541 },{ 00:23:46.541 "params": { 00:23:46.541 "name": "Nvme8", 00:23:46.541 "trtype": "tcp", 00:23:46.541 "traddr": "10.0.0.2", 00:23:46.541 "adrfam": "ipv4", 00:23:46.541 "trsvcid": "4420", 00:23:46.541 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:46.541 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:46.541 "hdgst": false, 00:23:46.541 "ddgst": false 00:23:46.541 }, 00:23:46.541 "method": "bdev_nvme_attach_controller" 00:23:46.541 },{ 00:23:46.541 "params": { 00:23:46.541 "name": "Nvme9", 00:23:46.541 "trtype": "tcp", 00:23:46.541 "traddr": "10.0.0.2", 00:23:46.541 "adrfam": "ipv4", 00:23:46.541 "trsvcid": "4420", 00:23:46.541 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:46.541 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:46.541 "hdgst": false, 00:23:46.541 "ddgst": false 00:23:46.541 }, 00:23:46.541 "method": "bdev_nvme_attach_controller" 00:23:46.541 },{ 00:23:46.541 "params": { 00:23:46.541 "name": "Nvme10", 00:23:46.541 "trtype": "tcp", 00:23:46.541 "traddr": "10.0.0.2", 00:23:46.541 "adrfam": "ipv4", 00:23:46.541 "trsvcid": "4420", 00:23:46.541 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:46.541 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:46.541 "hdgst": false, 00:23:46.541 "ddgst": false 00:23:46.541 }, 00:23:46.541 "method": "bdev_nvme_attach_controller" 00:23:46.541 }' 00:23:46.541 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.541 [2024-07-24 19:00:31.496269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.799 [2024-07-24 19:00:31.582594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2584782 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:48.184 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:49.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2584782 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2584470 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.119 { 00:23:49.119 "params": { 00:23:49.119 "name": "Nvme$subsystem", 00:23:49.119 "trtype": "$TEST_TRANSPORT", 00:23:49.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.119 "adrfam": "ipv4", 00:23:49.119 "trsvcid": "$NVMF_PORT", 00:23:49.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.119 "hdgst": ${hdgst:-false}, 00:23:49.119 "ddgst": ${ddgst:-false} 00:23:49.119 }, 00:23:49.119 "method": "bdev_nvme_attach_controller" 00:23:49.119 } 00:23:49.119 EOF 00:23:49.119 )") 00:23:49.119 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.119 { 00:23:49.119 "params": { 00:23:49.119 "name": "Nvme$subsystem", 00:23:49.119 "trtype": "$TEST_TRANSPORT", 00:23:49.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.119 "adrfam": "ipv4", 00:23:49.119 "trsvcid": "$NVMF_PORT", 00:23:49.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.119 "hdgst": ${hdgst:-false}, 00:23:49.119 "ddgst": ${ddgst:-false} 00:23:49.119 }, 00:23:49.119 "method": "bdev_nvme_attach_controller" 00:23:49.119 } 00:23:49.119 EOF 00:23:49.119 )") 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.119 { 00:23:49.119 "params": { 00:23:49.119 "name": "Nvme$subsystem", 00:23:49.119 "trtype": "$TEST_TRANSPORT", 00:23:49.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.119 "adrfam": "ipv4", 00:23:49.119 "trsvcid": "$NVMF_PORT", 00:23:49.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.119 "hdgst": ${hdgst:-false}, 00:23:49.119 "ddgst": ${ddgst:-false} 00:23:49.119 }, 00:23:49.119 "method": "bdev_nvme_attach_controller" 00:23:49.119 } 00:23:49.119 EOF 00:23:49.119 )") 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.119 { 00:23:49.119 "params": { 00:23:49.119 "name": "Nvme$subsystem", 00:23:49.119 "trtype": "$TEST_TRANSPORT", 00:23:49.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.119 "adrfam": "ipv4", 00:23:49.119 "trsvcid": "$NVMF_PORT", 00:23:49.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.119 "hdgst": ${hdgst:-false}, 00:23:49.119 "ddgst": ${ddgst:-false} 00:23:49.119 }, 00:23:49.119 "method": "bdev_nvme_attach_controller" 00:23:49.119 } 00:23:49.119 EOF 00:23:49.119 )") 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.119 { 00:23:49.119 "params": { 00:23:49.119 "name": "Nvme$subsystem", 00:23:49.119 "trtype": "$TEST_TRANSPORT", 00:23:49.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.119 "adrfam": "ipv4", 00:23:49.119 "trsvcid": "$NVMF_PORT", 00:23:49.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.119 "hdgst": ${hdgst:-false}, 00:23:49.119 "ddgst": ${ddgst:-false} 00:23:49.119 }, 00:23:49.119 "method": "bdev_nvme_attach_controller" 00:23:49.119 } 00:23:49.119 EOF 00:23:49.119 )") 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.119 { 00:23:49.119 "params": { 00:23:49.119 "name": "Nvme$subsystem", 00:23:49.119 "trtype": "$TEST_TRANSPORT", 00:23:49.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.119 "adrfam": "ipv4", 00:23:49.119 "trsvcid": "$NVMF_PORT", 00:23:49.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.119 "hdgst": ${hdgst:-false}, 00:23:49.119 "ddgst": ${ddgst:-false} 00:23:49.119 }, 00:23:49.119 "method": "bdev_nvme_attach_controller" 00:23:49.119 } 00:23:49.119 EOF 00:23:49.119 )") 00:23:49.119 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.120 { 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme$subsystem", 00:23:49.120 "trtype": "$TEST_TRANSPORT", 00:23:49.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "$NVMF_PORT", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.120 "hdgst": ${hdgst:-false}, 00:23:49.120 "ddgst": ${ddgst:-false} 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 } 00:23:49.120 EOF 00:23:49.120 )") 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.120 [2024-07-24 
19:00:34.041858] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:23:49.120 [2024-07-24 19:00:34.041920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585238 ] 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.120 { 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme$subsystem", 00:23:49.120 "trtype": "$TEST_TRANSPORT", 00:23:49.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "$NVMF_PORT", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.120 "hdgst": ${hdgst:-false}, 00:23:49.120 "ddgst": ${ddgst:-false} 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 } 00:23:49.120 EOF 00:23:49.120 )") 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.120 { 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme$subsystem", 00:23:49.120 "trtype": "$TEST_TRANSPORT", 00:23:49.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "$NVMF_PORT", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.120 "hdgst": ${hdgst:-false}, 00:23:49.120 "ddgst": ${ddgst:-false} 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 } 00:23:49.120 EOF 00:23:49.120 )") 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.120 { 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme$subsystem", 00:23:49.120 "trtype": "$TEST_TRANSPORT", 00:23:49.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "$NVMF_PORT", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.120 "hdgst": ${hdgst:-false}, 00:23:49.120 "ddgst": ${ddgst:-false} 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 } 00:23:49.120 EOF 00:23:49.120 )") 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
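The loop traced here builds bdevperf's attach configuration: one bdev_nvme_attach_controller stanza per subsystem, Nvme1 through Nvme10, all pointing at the 10.0.0.2:4420 listener. The jq/printf steps collapse the stanzas into a single JSON document, shown fully expanded just below, which reaches bdevperf as /dev/fd/62 via process substitution. A minimal sketch of the same invocation, assuming the target from nvmftestinit is still listening:

    # ten TCP controllers; queue depth 64, 64 KiB I/Os, 1-second verify workload
    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1

The per-device IOPS and latency table at the end of this run is the output of that verify pass.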
00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:49.120 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme1", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme2", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme3", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme4", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme5", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme6", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme7", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme8", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme9", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 },{ 00:23:49.120 "params": { 00:23:49.120 "name": "Nvme10", 00:23:49.120 "trtype": "tcp", 00:23:49.120 "traddr": "10.0.0.2", 00:23:49.120 "adrfam": "ipv4", 00:23:49.120 "trsvcid": "4420", 00:23:49.120 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:49.120 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:49.120 "hdgst": false, 00:23:49.120 "ddgst": false 00:23:49.120 }, 00:23:49.120 "method": "bdev_nvme_attach_controller" 00:23:49.120 }' 00:23:49.120 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.120 [2024-07-24 19:00:34.126288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.378 [2024-07-24 19:00:34.213754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.753 Running I/O for 1 seconds... 00:23:52.130 00:23:52.130 Latency(us) 00:23:52.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.130 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme1n1 : 1.09 175.47 10.97 0.00 0.00 360294.09 32648.84 312666.30 00:23:52.130 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme2n1 : 1.19 161.20 10.07 0.00 0.00 384143.98 73876.95 356515.84 00:23:52.130 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme3n1 : 1.10 186.57 11.66 0.00 0.00 312938.90 33125.47 314572.80 00:23:52.130 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme4n1 : 1.11 231.45 14.47 0.00 0.00 255231.30 22997.18 287881.77 00:23:52.130 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme5n1 : 1.14 168.36 10.52 0.00 0.00 343514.76 53143.74 289788.28 00:23:52.130 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme6n1 : 1.20 160.02 10.00 0.00 0.00 355859.24 40274.85 348889.83 00:23:52.130 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme7n1 : 1.20 216.29 13.52 0.00 0.00 256419.22 5987.61 289788.28 00:23:52.130 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme8n1 : 1.18 217.01 13.56 0.00 0.00 249996.10 23473.80 306946.79 00:23:52.130 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme9n1 : 1.21 214.28 13.39 0.00 0.00 248292.45 2144.81 305040.29 00:23:52.130 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 
64, IO size: 65536) 00:23:52.130 Verification LBA range: start 0x0 length 0x400 00:23:52.130 Nvme10n1 : 1.23 216.97 13.56 0.00 0.00 239679.26 1735.21 331731.32 00:23:52.130 =================================================================================================================== 00:23:52.130 Total : 1947.63 121.73 0.00 0.00 293142.75 1735.21 356515.84 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.130 19:00:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.130 rmmod nvme_tcp 00:23:52.130 rmmod nvme_fabrics 00:23:52.130 rmmod nvme_keyring 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2584470 ']' 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2584470 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2584470 ']' 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2584470 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2584470 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2584470' 00:23:52.130 killing process with pid 2584470 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2584470 00:23:52.130 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2584470 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.065 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.969 00:23:54.969 real 0m15.947s 00:23:54.969 user 0m36.307s 00:23:54.969 sys 0m5.906s 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:54.969 ************************************ 00:23:54.969 END TEST nvmf_shutdown_tc1 00:23:54.969 ************************************ 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:54.969 ************************************ 00:23:54.969 START TEST nvmf_shutdown_tc2 00:23:54.969 ************************************ 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.969 19:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:54.969 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:54.969 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.969 19:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:54.969 Found net devices under 0000:af:00.0: cvl_0_0 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:54.969 Found net devices under 0000:af:00.1: cvl_0_1 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.969 19:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.969 19:00:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:55.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:55.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms
00:23:55.227
00:23:55.227 --- 10.0.0.2 ping statistics ---
00:23:55.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:55.227 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:55.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:55.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:23:55.227
00:23:55.227 --- 10.0.0.1 ping statistics ---
00:23:55.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:55.227 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:55.227 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2586487
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2586487
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2586487 ']'
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.228 19:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.485 [2024-07-24 19:00:40.273184] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:23:55.485 [2024-07-24 19:00:40.273224] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.485 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.485 [2024-07-24 19:00:40.347239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.485 [2024-07-24 19:00:40.452594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.485 [2024-07-24 19:00:40.452648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.485 [2024-07-24 19:00:40.452661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.485 [2024-07-24 19:00:40.452672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.485 [2024-07-24 19:00:40.452681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.485 [2024-07-24 19:00:40.452830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.485 [2024-07-24 19:00:40.452944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.485 [2024-07-24 19:00:40.453055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.485 [2024-07-24 19:00:40.453057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.441 [2024-07-24 19:00:41.263393] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:56.441 19:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.441 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.441 Malloc1 00:23:56.441 [2024-07-24 19:00:41.369704] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.441 Malloc2 00:23:56.441 Malloc3 00:23:56.701 Malloc4 00:23:56.701 Malloc5 00:23:56.701 Malloc6 00:23:56.701 Malloc7 00:23:56.701 Malloc8 00:23:56.960 Malloc9 00:23:56.960 Malloc10 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2586805 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2586805 /var/tmp/bdevperf.sock 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2586805 ']' 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
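The shutdown.sh@102 invocation above hands the generated configuration to bdevperf as --json /dev/fd/63, which is what bash process substitution produces, so no config file is written to disk; the harness then blocks on the tool's RPC socket before driving I/O. A sketch of that launch sequence under the same assumptions as the gen_target_json sketch earlier (the binary path and the retry loop are illustrative; the trace's real config helper is gen_nvmf_target_json, and framework_wait_init is the RPC the trace itself issues):

SOCK=/var/tmp/bdevperf.sock
BDEVPERF=./build/examples/bdevperf    # path assumed from the workspace layout

# <(...) appears inside the bdevperf process as /dev/fd/63, as in the trace.
"$BDEVPERF" -r "$SOCK" --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Retry until the RPC socket accepts commands; framework_wait_init only
# returns once the SPDK application has finished initializing.
while ! ./scripts/rpc.py -s "$SOCK" framework_wait_init 2>/dev/null; do
    kill -0 "$perfpid" 2>/dev/null || exit 1   # give up if bdevperf died early
    sleep 0.2
done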
00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.960 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.960 { 00:23:56.960 "params": { 00:23:56.960 "name": "Nvme$subsystem", 00:23:56.960 "trtype": "$TEST_TRANSPORT", 00:23:56.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.960 "adrfam": "ipv4", 00:23:56.960 "trsvcid": "$NVMF_PORT", 00:23:56.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.960 "hdgst": ${hdgst:-false}, 00:23:56.960 "ddgst": ${ddgst:-false} 00:23:56.960 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 [2024-07-24 19:00:41.863215] Starting SPDK 
v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:23:56.961 [2024-07-24 19:00:41.863274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586805 ] 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.961 { 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme$subsystem", 00:23:56.961 "trtype": "$TEST_TRANSPORT", 00:23:56.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "$NVMF_PORT", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.961 "hdgst": ${hdgst:-false}, 00:23:56.961 "ddgst": ${ddgst:-false} 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 } 00:23:56.961 EOF 00:23:56.961 )") 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
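The @557/@558 steps that follow perform the join itself: with IFS set to a comma, the quoted "${config[*]}" expansion concatenates every accumulated fragment with ',' between them, which is why the expanded printf argument below reads '{...},{...},...'. A standalone demonstration of just that bash semantic (the array contents here are illustrative):

demo_join() {
    local -a config=('{"a":1}' '{"b":2}' '{"c":3}')
    local IFS=,
    # "${config[*]}" joins the elements with the first character of IFS;
    # "${config[@]}" would keep them as three separate words instead.
    printf '%s\n' "${config[*]}"                 # -> {"a":1},{"b":2},{"c":3}
    printf '[%s]\n' "${config[*]}" | jq length   # -> 3, once wrapped into valid JSON
}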
00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:56.961 19:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme1", 00:23:56.961 "trtype": "tcp", 00:23:56.961 "traddr": "10.0.0.2", 00:23:56.961 "adrfam": "ipv4", 00:23:56.961 "trsvcid": "4420", 00:23:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.961 "hdgst": false, 00:23:56.961 "ddgst": false 00:23:56.961 }, 00:23:56.961 "method": "bdev_nvme_attach_controller" 00:23:56.961 },{ 00:23:56.961 "params": { 00:23:56.961 "name": "Nvme2", 00:23:56.961 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme3", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme4", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme5", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme6", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme7", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme8", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme9", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 },{ 00:23:56.962 "params": { 00:23:56.962 "name": "Nvme10", 00:23:56.962 "trtype": "tcp", 00:23:56.962 "traddr": "10.0.0.2", 00:23:56.962 "adrfam": "ipv4", 00:23:56.962 "trsvcid": "4420", 00:23:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:56.962 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:56.962 "hdgst": false, 00:23:56.962 "ddgst": false 00:23:56.962 }, 00:23:56.962 "method": "bdev_nvme_attach_controller" 00:23:56.962 }' 00:23:56.962 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.962 [2024-07-24 19:00:41.944477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.220 [2024-07-24 19:00:42.033381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.604 Running I/O for 10 seconds... 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.172 19:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:59.172 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:59.432 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:59.690 19:00:44 
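The loop traced above (target/shutdown.sh@50-69) is waitforio: it polls bdev_get_iostat over bdevperf's RPC socket until the Nvme1n1 bdev has completed at least 100 reads, retrying up to 10 times. A condensed sketch reconstructed from the trace (socket, jq filter, and thresholds are exactly as shown; the body is paraphrased):

    waitforio() {
        local sock=$1 bdev=$2 ret=1 i
        [ -z "$sock" ] && return 1      # shutdown.sh@50
        [ -z "$bdev" ] && return 1      # shutdown.sh@54
        for ((i = 10; i != 0; i--)); do
            # Ask the bdevperf app how many reads have completed so far.
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0                   # I/O is flowing; safe to start shutdown
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In the run above the count goes 3, then 67, then 131, so the check passes on the third poll.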
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2586805 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2586805 ']' 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2586805 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2586805 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2586805' 00:23:59.690 killing process with pid 2586805 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2586805 00:23:59.690 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2586805 00:23:59.949 Received shutdown signal, test time was about 1.119670 seconds 00:23:59.949 00:23:59.949 Latency(us) 00:23:59.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme1n1 : 1.04 183.90 11.49 0.00 0.00 343238.28 30146.56 329824.81 00:23:59.949 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme2n1 : 1.05 182.48 11.40 0.00 0.00 337869.11 31218.97 270723.26 00:23:59.949 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme3n1 : 1.02 187.81 11.74 0.00 0.00 320408.36 31933.91 303133.79 00:23:59.949 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme4n1 : 1.04 246.50 15.41 0.00 0.00 238403.96 23116.33 272629.76 00:23:59.949 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme5n1 : 1.12 171.63 10.73 0.00 0.00 323963.81 21448.15 387019.87 00:23:59.949 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme6n1 : 1.08 177.95 11.12 0.00 0.00 315517.36 35270.28 331731.32 00:23:59.949 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme7n1 : 1.03 186.02 11.63 0.00 0.00 292106.86 31933.91 287881.77 00:23:59.949 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme8n1 : 1.02 188.63 11.79 0.00 0.00 279758.66 
45756.04 308853.29 00:23:59.949 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme9n1 : 1.05 182.70 11.42 0.00 0.00 281240.36 15609.48 314572.80 00:23:59.949 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:59.949 Verification LBA range: start 0x0 length 0x400 00:23:59.949 Nvme10n1 : 1.12 172.19 10.76 0.00 0.00 283384.24 20494.89 326011.81 00:23:59.949 =================================================================================================================== 00:23:59.950 Total : 1879.82 117.49 0.00 0.00 299550.87 15609.48 387019.87 00:24:00.208 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2586487 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.143 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.143 rmmod nvme_tcp 00:24:01.143 rmmod nvme_fabrics 00:24:01.143 rmmod nvme_keyring 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2586487 ']' 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2586487 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2586487 ']' 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2586487 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # 
uname 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2586487 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2586487' 00:24:01.143 killing process with pid 2586487 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2586487 00:24:01.143 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2586487 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.710 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.622 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.622 00:24:03.622 real 0m8.732s 00:24:03.622 user 0m27.460s 00:24:03.622 sys 0m1.473s 00:24:03.622 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.622 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.622 ************************************ 00:24:03.622 END TEST nvmf_shutdown_tc2 00:24:03.622 ************************************ 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:03.882 ************************************ 00:24:03.882 START TEST nvmf_shutdown_tc3 00:24:03.882 ************************************ 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:24:03.882 19:00:48 
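Both teardown paths above go through killprocess (autotest_common.sh@948-972): validate the pid, resolve its command name so a sudo wrapper is never signalled directly, then kill and reap it. Roughly, per the trace (the sudo handling and exact return values are simplified assumptions here):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1       # @948
        kill -0 "$pid" || return 1      # @952: probe; behavior on a dead pid assumed
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # @954
        fi
        if [ "$process_name" != sudo ]; then                  # @958
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true             # @972: reap, tolerating a nonzero exit
    }

Here it reaps bdevperf (reactor_0, pid 2586805) and then the nvmf target (reactor_1, pid 2586487), after which nvmftestfini unloads nvme-tcp/nvme-fabrics and flushes the test interfaces.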
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 
-- # mlx=() 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:03.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:03.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:03.882 Found net devices under 0000:af:00.0: cvl_0_0 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.882 19:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:03.882 Found net devices under 0000:af:00.1: cvl_0_1 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.882 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.140 19:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:04.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:24:04.140 00:24:04.140 --- 10.0.0.2 ping statistics --- 00:24:04.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.140 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:24:04.140 00:24:04.140 --- 10.0.0.1 ping statistics --- 00:24:04.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.140 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:04.140 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2588115 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2588115 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2588115 ']' 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.140 19:00:49 
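The nvmftestinit phase above (nvmf/common.sh@229-268) splits the two detected e810 ports across network namespaces: the target port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and bidirectional pings verify the path before the target starts. The sequence, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # root ns -> target (0.169 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back (0.249 ms above)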
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.140 19:00:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:04.140 [2024-07-24 19:00:49.073362] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:24:04.140 [2024-07-24 19:00:49.073419] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.415 [2024-07-24 19:00:49.160460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.415 [2024-07-24 19:00:49.270130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.415 [2024-07-24 19:00:49.270174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.416 [2024-07-24 19:00:49.270187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.416 [2024-07-24 19:00:49.270199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.416 [2024-07-24 19:00:49.270208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
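The target is launched with -m 0x1E, i.e. binary 11110: bit 0 is clear, leaving core 0 free for the initiator side (bdevperf later starts there with its default mask), while reactors are pinned to cores 1-4, matching "Total cores available: 4" and the reactor notices that follow. A quick way to expand such a mask:

    # 0x1E = 0b11110: bit 0 clear, bits 1-4 set
    mask=0x1E
    for c in {0..7}; do
        (( (mask >> c) & 1 )) && printf 'core %d\n' "$c"
    done
    # prints: core 1, core 2, core 3, core 4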
00:24:04.416 [2024-07-24 19:00:49.270276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.416 [2024-07-24 19:00:49.270388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.416 [2024-07-24 19:00:49.270500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:04.416 [2024-07-24 19:00:49.270502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.377 [2024-07-24 19:00:50.061469] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.377 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.377 Malloc1 00:24:05.377 [2024-07-24 19:00:50.168715] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.377 Malloc2 00:24:05.377 Malloc3 00:24:05.377 Malloc4 00:24:05.377 Malloc5 00:24:05.635 Malloc6 00:24:05.635 Malloc7 00:24:05.635 Malloc8 00:24:05.635 Malloc9 00:24:05.635 Malloc10 00:24:05.635 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.635 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:05.635 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.635 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2588494 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2588494 /var/tmp/bdevperf.sock 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2588494 ']' 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.896 19:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 
"name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.896 "trsvcid": "$NVMF_PORT", 00:24:05.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.896 "hdgst": ${hdgst:-false}, 00:24:05.896 "ddgst": ${ddgst:-false} 00:24:05.896 }, 00:24:05.896 "method": "bdev_nvme_attach_controller" 00:24:05.896 } 00:24:05.896 EOF 00:24:05.896 )") 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.896 [2024-07-24 19:00:50.709825] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:24:05.896 [2024-07-24 19:00:50.709888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588494 ] 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.896 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.896 { 00:24:05.896 "params": { 00:24:05.896 "name": "Nvme$subsystem", 00:24:05.896 "trtype": "$TEST_TRANSPORT", 00:24:05.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.896 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "$NVMF_PORT", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.897 "hdgst": ${hdgst:-false}, 00:24:05.897 "ddgst": ${ddgst:-false} 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 } 00:24:05.897 EOF 00:24:05.897 )") 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.897 { 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme$subsystem", 00:24:05.897 "trtype": "$TEST_TRANSPORT", 00:24:05.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "$NVMF_PORT", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.897 "hdgst": ${hdgst:-false}, 00:24:05.897 "ddgst": ${ddgst:-false} 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 } 00:24:05.897 EOF 00:24:05.897 )") 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.897 { 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme$subsystem", 00:24:05.897 "trtype": "$TEST_TRANSPORT", 00:24:05.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.897 
"adrfam": "ipv4", 00:24:05.897 "trsvcid": "$NVMF_PORT", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.897 "hdgst": ${hdgst:-false}, 00:24:05.897 "ddgst": ${ddgst:-false} 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 } 00:24:05.897 EOF 00:24:05.897 )") 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:05.897 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme1", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme2", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme3", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme4", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme5", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme6", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme7", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 
00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme8", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme9", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 },{ 00:24:05.897 "params": { 00:24:05.897 "name": "Nvme10", 00:24:05.897 "trtype": "tcp", 00:24:05.897 "traddr": "10.0.0.2", 00:24:05.897 "adrfam": "ipv4", 00:24:05.897 "trsvcid": "4420", 00:24:05.897 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:05.897 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:05.897 "hdgst": false, 00:24:05.897 "ddgst": false 00:24:05.897 }, 00:24:05.897 "method": "bdev_nvme_attach_controller" 00:24:05.897 }' 00:24:05.897 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.897 [2024-07-24 19:00:50.792172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.897 [2024-07-24 19:00:50.878421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.803 Running I/O for 10 seconds... 
00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.803 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:07.804 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.063 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.063 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:08.063 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:08.063 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:08.323 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2588115 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2588115 ']' 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2588115 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.582 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2588115 00:24:08.855 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:08.855 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:08.855 19:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2588115'
00:24:08.855 killing process with pid 2588115
00:24:08.855 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2588115
00:24:08.855 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2588115
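The waitforio and killprocess helpers traced above poll bdevperf's iostat until enough read I/O has completed, then stop the target. A simplified reconstruction of both, derived from the trace rather than copied from SPDK's scripts (rpc_cmd is SPDK's test wrapper around scripts/rpc.py, which the trace itself uses):

# Sketch of the two helpers as the xtrace above exercises them.
waitforio() {
  local sock=$1 bdev=$2
  local i ret=1 ops
  for ((i = 10; i != 0; i--)); do
    # read_io_count went 3 -> 67 -> 131 in the trace; 100 is the threshold
    ops=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$ops" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1
  kill -0 "$pid" || return 1   # is the process still running?
  kill "$pid"
  wait "$pid" || true          # reap it; a nonzero exit is expected during shutdown
}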
00:24:08.855 [2024-07-24 19:00:53.601032] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700450 is same with the state(6) to be set
00:24:08.856 [... the message above repeats, with only the timestamp advancing, from 19:00:53.601129 through 19:00:53.602326 ...]
00:24:08.856 [2024-07-24 19:00:53.606175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.606216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.856 [2024-07-24 19:00:53.606230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.606242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.856 [2024-07-24 19:00:53.606254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.606265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.856 [2024-07-24 19:00:53.606275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.606287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.856 [2024-07-24 19:00:53.606298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set
00:24:08.856 [2024-07-24 19:00:53.608804] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:08.856 [2024-07-24 19:00:53.622896] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700910 is same with the state(6) to be set
00:24:08.856 [... the message above repeats, with only the timestamp advancing, from 19:00:53.622969 through 19:00:53.624182; the distinct entries interleaved with that run follow, de-interleaved ...]
00:24:08.856 [2024-07-24 19:00:53.623690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor
00:24:08.856 [2024-07-24 19:00:53.623792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.623809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.856 [2024-07-24 19:00:53.623822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.623836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.856 [2024-07-24 19:00:53.623847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.856 [2024-07-24 19:00:53.623859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.857 [2024-07-24 19:00:53.623871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.857 [2024-07-24 19:00:53.623887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.857 [2024-07-24 19:00:53.623899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set
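When the submission queues are deleted during shutdown, every command still in flight completes with ABORTED - SQ DELETION; the READ/WRITE notice flood that follows is one such completion per outstanding command. A quick way to tally them from a saved copy of this console output (GNU grep assumed; console.log is a placeholder file name):

grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:1 cid:[0-9]*' console.log | awk '{print $2}' | sort | uniq -c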
00:24:08.857 [2024-07-24 19:00:53.624936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.857 [2024-07-24 19:00:53.624963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.857 [... nine more WRITE/ABORTED pairs follow for cid:55-63 (lba:23424-24448), then fifty-four READ/ABORTED pairs for cid:0-53 (lba:16384-23168), 19:00:53.624983 through 19:00:53.626376; every command still outstanding on qid:1 is completed as ABORTED - SQ DELETION ...]
00:24:08.858 [2024-07-24 19:00:53.626817] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x264bff0 was disconnected and freed. reset controller.
00:24:08.858 [2024-07-24 19:00:53.628556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:08.858 [2024-07-24 19:00:53.628593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor
00:24:08.858 [2024-07-24 19:00:53.629737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.858 [2024-07-24 19:00:53.629768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420
00:24:08.858 [2024-07-24 19:00:53.629781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set
00:24:08.858 [2024-07-24 19:00:53.630818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor
00:24:08.858 [2024-07-24 19:00:53.631229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:08.858 [2024-07-24 19:00:53.631251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:08.858 [2024-07-24 19:00:53.631262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
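errno 111 is ECONNREFUSED: the target process has just been killed, so the reconnect attempt for cnode10 has nothing to connect to and the controller is moved to the failed state, which is the expected outcome in this shutdown test. When debugging an unexpected occurrence of the same cascade, a bash-only probe (using the address printed in the log) distinguishes a dead listener from a broken network path:

timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
  && echo 'port 4420 still accepting connections' \
  || echo 'connection refused or timed out (expected once the target is down)'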
00:24:08.858 [2024-07-24 19:00:53.631312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.858 [2024-07-24 19:00:53.631326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.859 [... twenty-nine more READ/ABORTED pairs follow for cid:1-29 (lba:16512-20096), 19:00:53.631342 through 19:00:53.631991; the commands queued for the failed controller are aborted in turn ...]
00:24:08.859 [2024-07-24 19:00:53.632003] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.859 [2024-07-24 19:00:53.632641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.859 [2024-07-24 19:00:53.632651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.632663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.632672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.632686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.632695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.632707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.632716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.632728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.632737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.632747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278bb40 is same with the state(6) to be set 00:24:08.860 [2024-07-24 19:00:53.632805] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x278bb40 was disconnected and freed. reset controller. 00:24:08.860 [2024-07-24 19:00:53.633817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
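Every completion echoed in the run above carries the same status pair, (00/08). SPDK prints completion status as (SCT/SC): status code type 0x0 selects the NVMe generic command status set, and status code 0x08 there is Command Aborted due to SQ Deletion, i.e. the queued READs were dropped because their submission queue was deleted during the reset, not because of media or transport I/O errors. A minimal decoding sketch (Python; names are illustrative, not SPDK API, and the table is deliberately partial):

    # Hedged sketch, not SPDK code: decode the "(SCT/SC)" pair that
    # spdk_nvme_print_completion prints, e.g. "ABORTED - SQ DELETION (00/08)".
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",  # SQ deleted mid-flight, as during a reset
    }

    def decode_status(sct, sc):
        """Human-readable NVMe completion status for a (SCT, SC) pair."""
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, "GENERIC STATUS 0x%02x" % sc)
        return "SCT 0x%x / SC 0x%02x" % (sct, sc)

    assert decode_status(0x0, 0x08) == "ABORTED - SQ DELETION"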
00:24:08.860 [2024-07-24 19:00:53.633946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.860 [2024-07-24 19:00:53.633963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.633974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.860 [2024-07-24 19:00:53.633984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.633994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.860 [2024-07-24 19:00:53.634003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.634014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.860 [2024-07-24 19:00:53.634023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.634032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.860 [2024-07-24 19:00:53.635913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:08.860 [2024-07-24 19:00:53.635948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.860 [2024-07-24 19:00:53.636001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.860 [2024-07-24 19:00:53.636779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.860 [2024-07-24 19:00:53.636790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:08.860 [2024-07-24 19:00:53.636801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.636985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.636997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 
19:00:53.637018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.637212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.637222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.861 [2024-07-24 19:00:53.646269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.861 [2024-07-24 19:00:53.646284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2652af0 is same with the state(6) to be set 00:24:08.861 [2024-07-24 19:00:53.646399] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700dd0 is same with the state(6) to be set 00:24:08.861 [2024-07-24 19:00:53.646460] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1700dd0 is same with the state(6) to be set 00:24:08.861 [2024-07-24 19:00:53.646483] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700dd0 is same with the state(6) to be set 00:24:08.861 [2024-07-24 19:00:53.647921] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17012b0 is same with the state(6) to be set [identical tcp.c:1706:nvmf_tcp_qpair_set_recv_state message for tqpair=0x17012b0 repeated verbatim, timestamps 19:00:53.647961 through 19:00:53.648670; duplicates elided] 00:24:08.862 [2024-07-24 19:00:53.649217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect:
*NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.862 [2024-07-24 19:00:53.649348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa050 is same with the state(6) to be set 00:24:08.862 [2024-07-24 19:00:53.649502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.862 [2024-07-24 19:00:53.649613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.862 [2024-07-24 19:00:53.649627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set 00:24:08.862 [2024-07-24 19:00:53.651085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.862 [2024-07-24 19:00:53.651120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a2dc0 with addr=10.0.0.2, port=4420 00:24:08.862 [2024-07-24 19:00:53.651134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.862 [2024-07-24 19:00:53.651265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.862 [2024-07-24 19:00:53.651283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d7120 with addr=10.0.0.2, port=4420 00:24:08.862 [2024-07-24 19:00:53.651295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set 00:24:08.862 [2024-07-24 19:00:53.652302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:08.862 [2024-07-24 19:00:53.652353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.862 [2024-07-24 19:00:53.652373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor 00:24:08.862 [2024-07-24 19:00:53.653076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.862 [2024-07-24 19:00:53.653110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420 00:24:08.862 [2024-07-24 19:00:53.653124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set 00:24:08.862 [2024-07-24 19:00:53.653138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:08.862 [2024-07-24 19:00:53.653150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:08.862 [2024-07-24 19:00:53.653164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:08.862 [2024-07-24 19:00:53.653186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.862 [2024-07-24 19:00:53.653198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.862 [2024-07-24 19:00:53.653210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:08.862 [2024-07-24 19:00:53.653288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.862 [2024-07-24 19:00:53.653306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.863 [command/completion pair repeated for cid:1 through cid:8 (lba 24704 through 25600, len:128 each), 19:00:53.653326 through 19:00:53.653552]
00:24:08.863 [2024-07-24 19:00:53.653567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26542b0 is same with the state(6) to be set
00:24:08.863 [2024-07-24 19:00:53.653647] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26542b0 was disconnected and freed. reset controller.
00:24:08.863 [2024-07-24 19:00:53.654090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.863 [2024-07-24 19:00:53.654117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.863 [2024-07-24 19:00:53.654138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor
00:24:08.863 [2024-07-24 19:00:53.655583] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:08.863 [2024-07-24 19:00:53.655995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:08.863 [2024-07-24 19:00:53.656028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor
00:24:08.863 [2024-07-24 19:00:53.656047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:08.863 [2024-07-24 19:00:53.656059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:08.863 [2024-07-24 19:00:53.656071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:24:08.863 [2024-07-24 19:00:53.656373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.863 [2024-07-24 19:00:53.656525] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701770 is same with the state(6) to be set
00:24:08.865 [last message repeated 62 times, 19:00:53.656583 through 19:00:53.657893; the following messages were interleaved with that run]
00:24:08.865 [2024-07-24 19:00:53.657243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.865 [2024-07-24 19:00:53.657278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2602d90 with addr=10.0.0.2, port=4420
00:24:08.865 [2024-07-24 19:00:53.657297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set
00:24:08.865 [2024-07-24 19:00:53.657550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor
00:24:08.865 [2024-07-24 19:00:53.657796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:24:08.865 [2024-07-24 19:00:53.657816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:24:08.865 [2024-07-24 19:00:53.657834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:24:08.865 [2024-07-24 19:00:53.658056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.865 [2024-07-24 19:00:53.659445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fa050 (9): Bad file descriptor 00:24:08.865 [2024-07-24 19:00:53.659964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.865 [2024-07-24 19:00:53.659994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:08.865 [2024-07-24 19:00:53.660428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.865 [2024-07-24 19:00:53.660456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d7120 with addr=10.0.0.2, port=4420 00:24:08.865 [2024-07-24 19:00:53.660470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set 00:24:08.865 [2024-07-24 19:00:53.660676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.865 [2024-07-24 19:00:53.660697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a2dc0 with addr=10.0.0.2, port=4420 00:24:08.865 [2024-07-24 19:00:53.660711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.865 [2024-07-24 19:00:53.660996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor 00:24:08.865 [2024-07-24 19:00:53.661026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.865 [2024-07-24 19:00:53.661295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.865 [2024-07-24 19:00:53.661315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.865 [2024-07-24 19:00:53.661329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.865 [2024-07-24 19:00:53.661349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:08.865 [2024-07-24 19:00:53.661362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:08.865 [2024-07-24 19:00:53.661374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:08.865 [2024-07-24 19:00:53.661647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.865 [2024-07-24 19:00:53.661667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.865 [2024-07-24 19:00:53.662963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:08.865 [2024-07-24 19:00:53.663444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.865 [2024-07-24 19:00:53.663474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420 00:24:08.865 [2024-07-24 19:00:53.663488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set 00:24:08.865 [2024-07-24 19:00:53.663769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor 00:24:08.865 [2024-07-24 19:00:53.664035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:08.865 [2024-07-24 19:00:53.664052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:08.865 [2024-07-24 19:00:53.664062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:08.865 [2024-07-24 19:00:53.664294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.865 [2024-07-24 19:00:53.666912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:08.865 [2024-07-24 19:00:53.667281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.865 [2024-07-24 19:00:53.667306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2602d90 with addr=10.0.0.2, port=4420 00:24:08.865 [2024-07-24 19:00:53.667318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set 00:24:08.865 [2024-07-24 19:00:53.667542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor 00:24:08.865 [2024-07-24 19:00:53.667784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:08.865 [2024-07-24 19:00:53.667801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:08.865 [2024-07-24 19:00:53.667813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:08.865 [2024-07-24 19:00:53.668177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.865 [2024-07-24 19:00:53.670251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.865 [2024-07-24 19:00:53.670276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.865 [command/completion pair repeated for cid:1 through cid:3, 19:00:53.670289 through 19:00:53.670342]
00:24:08.865 [2024-07-24 19:00:53.670352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a4bb0 is same with the state(6) to be set
00:24:08.865 [2024-07-24 19:00:53.670441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.865 [2024-07-24 19:00:53.670456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.867 [command/completion pair repeated for cid:1 through cid:63 (lba 16512 through 24448, len:128 each), 19:00:53.670476 through 19:00:53.671926]
00:24:08.867 [2024-07-24 19:00:53.671936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d1430 is same with the state(6) to be set
00:24:08.867 [2024-07-24 19:00:53.671993] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25d1430 was disconnected and freed. reset controller.
00:24:08.867 [2024-07-24 19:00:53.672251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 
19:00:53.672492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.867 [2024-07-24 19:00:53.672649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.867 [2024-07-24 19:00:53.672661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672731] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868 [2024-07-24 19:00:53.672903] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868 [2024-07-24 19:00:53.672934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868 [2024-07-24 19:00:53.672945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.672953] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.672974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.672977] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.672987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.672998] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673019] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673039] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673068] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673091] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673112] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673132] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673154] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673176] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673195] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673215] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673236] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673256] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673279] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673299] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673321] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673347] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673369] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.868
[2024-07-24 19:00:53.673389] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.868
[2024-07-24 19:00:53.673394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.868
[2024-07-24 19:00:53.673407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673409] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673429] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673450] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673470] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673496] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673515] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673536] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673558] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673578] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673598] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673627] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673651] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673673] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673694] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673718] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673737] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673758] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673779] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673799] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673820] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673839] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.869
[2024-07-24 19:00:53.673861] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.869
[2024-07-24 19:00:53.673883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2655830 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673881] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673903] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673922] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673943] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673962] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673981] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.673999] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.674020] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.674038] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.869
[2024-07-24 19:00:53.674056] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.674074] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.674093] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701c50 is same with the state(6)
to be set 00:24:08.870 [2024-07-24 19:00:53.676143] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702110 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.676978] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677008] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677020] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677032] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677043] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677055] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677066] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677077] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677088] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677099] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677111] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677111] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2f042c0 was disconnected and freed. reset controller. 
00:24:08.870 [2024-07-24 19:00:53.677122] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677134] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677145] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677157] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677168] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677185] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677213] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677224] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677236] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677246] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677258] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:08.870 [2024-07-24 19:00:53.677270] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.870 [2024-07-24 19:00:53.677281] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:08.870 [2024-07-24 19:00:53.677293] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:08.870 [2024-07-24 19:00:53.677305] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677317] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a4bb0 (9): Bad file descriptor 00:24:08.870 [2024-07-24 19:00:53.677328] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 
19:00:53.677341] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677351] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677363] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677373] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677384] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677395] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677406] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677418] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677428] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677439] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677450] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677463] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677475] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677485] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677496] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677508] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677518] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677530] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677540] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677551] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677562] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.677573] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same 
with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677584] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677595] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677613] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677626] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677638] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677648] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677659] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677670] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677681] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:08.870
[2024-07-24 19:00:53.677692] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677706] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677718] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677729] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17025d0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.677739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269c660 (9): Bad file descriptor 00:24:08.870
[2024-07-24 19:00:53.677998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.870
[2024-07-24 19:00:53.678019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a2dc0 with addr=10.0.0.2, port=4420 00:24:08.870
[2024-07-24 19:00:53.678029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.678152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.870
[2024-07-24 19:00:53.678167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d7120 with addr=10.0.0.2, port=4420 00:24:08.870
[2024-07-24 19:00:53.678176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set 00:24:08.870
[2024-07-24 19:00:53.678295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.870
[2024-07-24 19:00:53.678309]
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25fa050 with addr=10.0.0.2, port=4420 00:24:08.870 [2024-07-24 19:00:53.678318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa050 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.678746] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.678794] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.678816] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.870 [2024-07-24 19:00:53.678836] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678854] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678873] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678891] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678910] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678930] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678948] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678967] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.678986] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679006] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679025] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679043] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679062] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679081] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679098] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679117] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting 
controller 00:24:08.871
[2024-07-24 19:00:53.679136] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:08.871
[2024-07-24 19:00:53.679159] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679182] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679200] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679219] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679239] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679257] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679275] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679294] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679312] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679331] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679350] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679368] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679387] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679414] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679433] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.871
[2024-07-24 19:00:53.679453] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a4bb0 with addr=10.0.0.2, port=4420 00:24:08.871
[2024-07-24 19:00:53.679471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a4bb0 is same with the state(6) to be set 00:24:08.871
[2024-07-24 19:00:53.679473] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be
set 00:24:08.871 [2024-07-24 19:00:53.679492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.871 [2024-07-24 19:00:53.679492] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor 00:24:08.871 [2024-07-24 19:00:53.679511] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fa050 (9): Bad file descriptor 00:24:08.871 [2024-07-24 19:00:53.679531] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679558] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679576] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679595] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679625] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679644] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679663] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679681] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679699] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679718] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679736] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679754] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679774] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679791] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679809] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679828] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679847] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679865] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679883] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679901] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.871 [2024-07-24 19:00:53.679920] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269c660 with addr=10.0.0.2, port=4420 00:24:08.871 [2024-07-24 19:00:53.679940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269c660 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679939] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679962] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.679981] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702a90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.680139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.871 [2024-07-24 19:00:53.680154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420 00:24:08.871 [2024-07-24 19:00:53.680168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.680360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.871 [2024-07-24 19:00:53.680374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2602d90 with addr=10.0.0.2, port=4420 00:24:08.871 [2024-07-24 19:00:53.680384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set 00:24:08.871 [2024-07-24 19:00:53.680396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a4bb0 (9): Bad file descriptor 00:24:08.871 [2024-07-24 19:00:53.680407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:08.871 [2024-07-24 19:00:53.680416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:08.871 [2024-07-24 19:00:53.680425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:08.871 [2024-07-24 19:00:53.680441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.871 [2024-07-24 19:00:53.680450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.871 [2024-07-24 19:00:53.680460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.871 [2024-07-24 19:00:53.680473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:08.871 [2024-07-24 19:00:53.680481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:08.871 [2024-07-24 19:00:53.680490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:08.871 [2024-07-24 19:00:53.680558] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:08.871 [2024-07-24 19:00:53.680690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.680703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.680711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.680722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269c660 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.680735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.680748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.680759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.680768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.680777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:08.872 [2024-07-24 19:00:53.680809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8610 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.680932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.680985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.680995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.681006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.681016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269bcd0 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.681047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.681059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.681071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.681080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.681091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.681101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.681111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.872 [2024-07-24 19:00:53.681121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.872 [2024-07-24 19:00:53.681130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2687210 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.681280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.681297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.681306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.681315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:08.872 [2024-07-24 19:00:53.681335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.681344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.681354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:08.872 [2024-07-24 19:00:53.681367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.681376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.681386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:08.872 [2024-07-24 19:00:53.681456] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:08.872 [2024-07-24 19:00:53.681528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.681540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.681549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.872 [2024-07-24 19:00:53.681816] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:08.872 [2024-07-24 19:00:53.687464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:08.872 [2024-07-24 19:00:53.687483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.872 [2024-07-24 19:00:53.687496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:08.872 [2024-07-24 19:00:53.687701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.872 [2024-07-24 19:00:53.687719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25fa050 with addr=10.0.0.2, port=4420 00:24:08.872 [2024-07-24 19:00:53.687730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa050 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.687975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.872 [2024-07-24 19:00:53.687989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d7120 with addr=10.0.0.2, port=4420 00:24:08.872 [2024-07-24 19:00:53.687999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.688121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.872 [2024-07-24 19:00:53.688135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a2dc0 with addr=10.0.0.2, port=4420 00:24:08.872 [2024-07-24 19:00:53.688144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.688180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fa050 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.688194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.688206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.688238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.688249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.688258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:08.872 [2024-07-24 19:00:53.688272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.688282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.688295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:08.872 [2024-07-24 19:00:53.688308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.688317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.688327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:08.872 [2024-07-24 19:00:53.688362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.688372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.688381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.872 [2024-07-24 19:00:53.688444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:08.872 [2024-07-24 19:00:53.688722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.872 [2024-07-24 19:00:53.688739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a4bb0 with addr=10.0.0.2, port=4420 00:24:08.872 [2024-07-24 19:00:53.688749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a4bb0 is same with the state(6) to be set 00:24:08.872 [2024-07-24 19:00:53.688782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a4bb0 (9): Bad file descriptor 00:24:08.872 [2024-07-24 19:00:53.688815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:08.872 [2024-07-24 19:00:53.688825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:08.872 [2024-07-24 19:00:53.688835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:08.872 [2024-07-24 19:00:53.688869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
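Each cnodeN above cycles through the same three-step pattern: nvme_ctrlr_disconnect (NOTICE: resetting controller), a reconnect attempt that dies on the refused TCP connect, and nvme_ctrlr_fail, after which bdev_nvme reports "Resetting controller failed." The control flow is roughly the following sketch; try_connect and the retry bound are hypothetical stand-ins, not SPDK symbols:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hedged sketch of the reset cycle visible in the log. The real path
     * runs through nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_poll_async
     * and nvme_ctrlr_fail in nvme_ctrlr.c; this only models the shape. */
    typedef bool (*try_connect_fn)(void);

    static bool refused(void) { return false; } /* models connect() -> ECONNREFUSED */

    static void reset_ctrlr(const char *nqn, try_connect_fn try_connect, int attempts)
    {
        for (int i = 0; i < attempts; i++) {
            printf("[%s] resetting controller\n", nqn);
            if (try_connect())
                return;                      /* reconnected: reset succeeds */
            printf("[%s] controller reinitialization failed\n", nqn);
        }
        printf("[%s] in failed state.\n", nqn);
        printf("Resetting controller failed.\n"); /* bdev_nvme's summary line */
    }

    int main(void)
    {
        reset_ctrlr("nqn.2016-06.io.spdk:cnode5", refused, 2);
        return 0;
    }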
00:24:08.872 [2024-07-24 19:00:53.689288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:08.872 [2024-07-24 19:00:53.689302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:08.872 [2024-07-24 19:00:53.689484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.872 [2024-07-24 19:00:53.689500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2602d90 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.689510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.689698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.689713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.689723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.689766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.689781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.689825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.689836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.689845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:08.873 [2024-07-24 19:00:53.689858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.689872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.689881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:08.873 [2024-07-24 19:00:53.689916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:08.873 [2024-07-24 19:00:53.689929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.873 [2024-07-24 19:00:53.689938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
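The "(9): Bad file descriptor" flush failures are the companion symptom: by the time nvme_tcp_qpair_process_completions tries to flush, the qpair's socket has already been closed, so any I/O on the stale descriptor returns EBADF (errno 9). The same effect on any closed descriptor:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;
        close(fds[1]);                      /* descriptor already torn down */
        if (write(fds[1], "x", 1) < 0)      /* the flush attempt */
            printf("Failed to flush (%d): %s\n", errno, strerror(errno));
        close(fds[0]);
        return 0;
    }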
00:24:08.873 [2024-07-24 19:00:53.690147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.690164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269c660 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.690174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269c660 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.690207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269c660 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.690240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.690250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.690260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:08.873 [2024-07-24 19:00:53.690294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.873 [2024-07-24 19:00:53.690718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d8610 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.690740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269bcd0 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.690761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2687210 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.697632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:08.873 [2024-07-24 19:00:53.697656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.873 [2024-07-24 19:00:53.697668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:08.873 [2024-07-24 19:00:53.697894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.697912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a2dc0 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.697924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.698129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.698146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d7120 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.698156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.698290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.698305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25fa050 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.698316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa050 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.698350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.698365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.698384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fa050 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.698416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.698426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.698436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:08.873 [2024-07-24 19:00:53.698450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.698459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.698469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.873 [2024-07-24 19:00:53.698482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.698491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.698501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:08.873 [2024-07-24 19:00:53.698537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.873 [2024-07-24 19:00:53.698548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.873 [2024-07-24 19:00:53.698556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.873 [2024-07-24 19:00:53.698628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:08.873 [2024-07-24 19:00:53.698788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.698806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a4bb0 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.698816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a4bb0 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.698850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a4bb0 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.698884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.698895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.698905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:08.873 [2024-07-24 19:00:53.698940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
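The recurring "recv state of tqpair=... is same with the state(6) to be set" lines come from a guard in nvme_tcp_qpair_set_recv_state that fires when the PDU receive state machine is asked to enter the state it already holds; once a qpair has errored out, each subsequent teardown path pushes it toward the same terminal state. A sketch of that guard, with the enum numbering assumed rather than taken from this SPDK revision:

    #include <stdio.h>

    enum recv_state { RECV_PDU_READY = 0, /* ... */ RECV_ERROR = 6 }; /* assumed values */

    struct tqpair { void *id; enum recv_state recv_state; };

    static void set_recv_state(struct tqpair *tq, enum recv_state state)
    {
        if (tq->recv_state == state) {
            printf("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                   tq->id, (int)state);
            return;
        }
        tq->recv_state = state;
    }

    int main(void)
    {
        struct tqpair tq = { (void *)0x25fa050, RECV_ERROR };
        set_recv_state(&tq, RECV_ERROR); /* duplicate transition -> notice fires */
        return 0;
    }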
00:24:08.873 [2024-07-24 19:00:53.699421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:08.873 [2024-07-24 19:00:53.699438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:08.873 [2024-07-24 19:00:53.699653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.699671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.699682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.699863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.873 [2024-07-24 19:00:53.699879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2602d90 with addr=10.0.0.2, port=4420 00:24:08.873 [2024-07-24 19:00:53.699890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set 00:24:08.873 [2024-07-24 19:00:53.699924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.699942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor 00:24:08.873 [2024-07-24 19:00:53.699976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.699987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:08.873 [2024-07-24 19:00:53.699997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:08.873 [2024-07-24 19:00:53.700011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:08.873 [2024-07-24 19:00:53.700020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:08.874 [2024-07-24 19:00:53.700029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:08.874 [2024-07-24 19:00:53.700071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.874 [2024-07-24 19:00:53.700082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
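The wall of READ ... ABORTED - SQ DELETION records that follows is each I/O qpair draining its in-flight commands once its submission queue is deleted by the reset. The aborted stream is strictly sequential: every command is len:128 blocks, and consecutive LBAs step by exactly 128 (17280, 17408, 17536, ...), i.e. back-to-back 64 KiB reads if one assumes a 512-byte block size. A quick check of the stride:

    #include <stdio.h>

    /* LBA values copied from the first few aborted READs in the log;
     * the stride equals the per-command length (128 blocks). */
    int main(void)
    {
        int lbas[] = { 17280, 17408, 17536, 17664 };
        for (int i = 1; i < 4; i++)
            printf("stride %d blocks\n", lbas[i] - lbas[i - 1]);
        return 0;
    }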
00:24:08.874 [2024-07-24 19:00:53.700116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:08.874 [2024-07-24 19:00:53.700283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.874 [2024-07-24 19:00:53.700301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269c660 with addr=10.0.0.2, port=4420 00:24:08.874 [2024-07-24 19:00:53.700312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269c660 is same with the state(6) to be set 00:24:08.874 [2024-07-24 19:00:53.700346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269c660 (9): Bad file descriptor 00:24:08.874 [2024-07-24 19:00:53.700380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:08.874 [2024-07-24 19:00:53.700390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:08.874 [2024-07-24 19:00:53.700400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:08.874 [2024-07-24 19:00:53.700434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.874 [2024-07-24 19:00:53.700836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.700855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.700874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.700885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.700898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.700909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.700922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.700932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.700945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.700956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.700973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.700984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.700997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.874 [2024-07-24 19:00:53.701640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.874 [2024-07-24 19:00:53.701654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
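Within each dump the cid values run the full 0-63 range before the qpair's closing recv-state line, so each I/O qpair had a queue depth of 64 commands outstanding when its SQ was deleted. With 128-block commands and (again assuming) 512-byte blocks, that is 4 MiB of read data in flight per qpair:

    #include <stdio.h>

    /* Queue depth and command size read off the log; block size assumed. */
    int main(void)
    {
        long qd = 64, blocks = 128, bs = 512;
        printf("%ld bytes (%ld MiB) outstanding\n",
               qd * blocks * bs, (qd * blocks * bs) >> 20);
        return 0;
    }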
00:24:08.875 [2024-07-24 19:00:53.701938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.701983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.701996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.702143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 
19:00:53.702165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.702176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d2920 is same with the state(6) to be set 00:24:08.875 [2024-07-24 19:00:53.703629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.703989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.703999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.704012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.704021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.875 [2024-07-24 19:00:53.704034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.875 [2024-07-24 19:00:53.704044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.876 [2024-07-24 19:00:53.704515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.876 [2024-07-24 19:00:53.704528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.876 [2024-07-24 19:00:53.704538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[24 further READ/completion pairs, cid:40-63 (lba:29696-32640, stepping 128 blocks), each aborted with the identical SQ DELETION (00/08) status, 19:00:53.704550-19:00:53.705093]
00:24:08.877 [2024-07-24 19:00:53.705103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30abd30 is same with the state(6) to be set
00:24:08.877 [2024-07-24 19:00:53.706549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.877 [2024-07-24 19:00:53.706566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[63 further READ/completion pairs, cid:1-63 (lba:16512-24448, stepping 128 blocks), each aborted with the identical SQ DELETION (00/08) status, 19:00:53.706583-19:00:53.708005]
00:24:08.878 [2024-07-24 19:00:53.708016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ab20 is same with the state(6) to be set
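For readers decoding these records: each pair above is an I/O command print followed by its completion, and the "(00/08)" in every completion is the NVMe status pair SCT/SC, i.e. status code type 0x00 (generic command status) and status code 0x08 (command aborted due to SQ deletion) — the expected outcome when a submission queue is torn down during shutdown with reads still queued. A minimal sketch of how a completion callback could classify these aborts, assuming only the SPDK public headers (illustrative, not code from this test):

/* Illustrative only: classify the "ABORTED - SQ DELETION (00/08)" status seen above.
 * Assumes SPDK's public headers; constant names are from spdk/nvme_spec.h. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	/* "(00/08)" is SCT/SC: type 0x00 (generic), code 0x08 (aborted, SQ deleted) */
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

static void
read_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (aborted_by_sq_deletion(cpl)) {
		/* Expected while the target shuts down: the queued READ was
		 * flushed along with the submission queue, not failed by the media. */
		printf("READ flushed by SQ deletion (sqhd:%u)\n", cpl->sqhd);
	}
}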
00:24:08.878 [2024-07-24 19:00:53.709763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:08.878 [2024-07-24 19:00:53.709792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:08.878 task offset: 23296 on job bdev=Nvme10n1 fails
00:24:08.878
00:24:08.878                                       Latency(us)
00:24:08.878 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:08.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.878 Job: Nvme1n1 ended in about 1.09 seconds with error
00:24:08.878 Verification LBA range: start 0x0 length 0x400
00:24:08.878 Nvme1n1            :       1.09     116.95       7.31      58.47      0.00  360542.95   31218.97  327918.31
00:24:08.879 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme2n1 ended in about 1.08 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme2n1            :       1.08     118.28       7.39      59.14      0.00  348503.04   10366.60  280255.77
00:24:08.879 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme3n1 ended in about 1.10 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme3n1            :       1.10     174.24      10.89       8.17      0.00  325215.06   18588.39  335544.32
00:24:08.879 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme4n1 ended in about 1.12 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme4n1            :       1.12     171.14      10.70      57.05      0.00  259393.16   23592.96  266910.25
00:24:08.879 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme5n1 ended in about 1.12 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme5n1            :       1.12     113.95       7.12      56.98      0.00  338693.28   39798.23  268816.76
00:24:08.879 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme6n1 ended in about 1.15 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme6n1            :       1.15     117.38       7.34      49.56      0.00  338834.77   51952.17  287881.77
00:24:08.879 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme7n1            :       1.12     199.69      12.48       0.00      0.00  270816.63    8221.79  231639.97
00:24:08.879 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme8n1 ended in about 1.15 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme8n1            :       1.15     166.52      10.41      55.51      0.00  243632.41   26095.24  316479.30
00:24:08.879 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme9n1 ended in about 1.16 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme9n1            :       1.16     110.73       6.92      55.37      0.00  318066.81   49330.73  322198.81
00:24:08.879 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.879 Job: Nvme10n1 ended in about 1.08 seconds with error
00:24:08.879 Verification LBA range: start 0x0 length 0x400
00:24:08.879 Nvme10n1           :       1.08     119.04       7.44      59.52      0.00  283376.87    3872.58  333637.82
00:24:08.879 ===================================================================================================================
00:24:08.879 Total              :               1407.91      87.99     459.76      0.00  304709.91    3872.58  335544.32
00:24:08.879 [2024-07-24 19:00:53.738966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:08.879 [2024-07-24 19:00:53.739015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
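A quick arithmetic check on the table: with the 65536-byte IO size from the job headers, MiB/s is simply IOPS / 16, e.g. 116.95 IOPS x 64 KiB = 7.31 MiB/s for Nvme1n1, and 1407.91 / 16 = 87.99 for the Total row. A standalone snippet reproducing the column:

/* Sanity-check the MiB/s column: IOPS x IO size / 2^20. No SPDK required. */
#include <stdio.h>

int main(void)
{
	const double io_size = 65536.0;              /* bytes, from the job header */
	const double iops[] = { 116.95, 1407.91 };   /* Nvme1n1 row, Total row */

	for (int i = 0; i < 2; i++) {
		printf("%.2f IOPS -> %.2f MiB/s\n", iops[i],
		       iops[i] * io_size / (1024.0 * 1024.0));  /* 7.31, 87.99 */
	}
	return 0;
}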
00:24:08.879 [2024-07-24 19:00:53.739450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.739479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d8610 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.739494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8610 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.739687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.739705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2687210 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.739716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2687210 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.739904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.739920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269bcd0 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.739930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269bcd0 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.739968] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.879 [2024-07-24 19:00:53.739984] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.879 [2024-07-24 19:00:53.739998] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.879 [2024-07-24 19:00:53.740012] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.879 [2024-07-24 19:00:53.740024] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.879 [2024-07-24 19:00:53.740038] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
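The repeated "connect() failed, errno = 111" lines are Linux ECONNREFUSED: the target's listener on 10.0.0.2:4420 is already gone, so every reconnect attempt is refused immediately. A minimal standalone reproduction of that failure mode (address and port taken from the log; no SPDK involved):

/* With no listener on 10.0.0.2:4420, connect() fails with ECONNREFUSED (111). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		/* With the nvmf target stopped, this prints:
		 * "connect: Connection refused (errno 111)" */
		printf("connect: %s (errno %d)\n", strerror(errno), errno);
	}
	close(fd);
	return 0;
}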
00:24:08.879 [2024-07-24 19:00:53.741024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d8610 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.741196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2687210 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.741210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269bcd0 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.741563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:08.879 [2024-07-24 19:00:53.741756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.741777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25fa050 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.741788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa050 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.741982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.741997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d7120 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.742009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7120 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.742282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.742297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27a2dc0 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.742308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a2dc0 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.742432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.742447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a4bb0 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.742458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a4bb0 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.742647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.742663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2602d90 with 
addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.742674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2602d90 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.742862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.742878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x262b6f0 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.742888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262b6f0 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.742899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:08.879 [2024-07-24 19:00:53.742909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:08.879 [2024-07-24 19:00:53.742920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:08.879 [2024-07-24 19:00:53.742935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:08.879 [2024-07-24 19:00:53.742949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:08.879 [2024-07-24 19:00:53.742959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:08.879 [2024-07-24 19:00:53.742971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:08.879 [2024-07-24 19:00:53.742980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:08.879 [2024-07-24 19:00:53.742990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:08.879 [2024-07-24 19:00:53.743056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.879 [2024-07-24 19:00:53.743068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.879 [2024-07-24 19:00:53.743077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.879 [2024-07-24 19:00:53.743210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.879 [2024-07-24 19:00:53.743225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269c660 with addr=10.0.0.2, port=4420 00:24:08.879 [2024-07-24 19:00:53.743236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269c660 is same with the state(6) to be set 00:24:08.879 [2024-07-24 19:00:53.743249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fa050 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.743263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d7120 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.743276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27a2dc0 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.743288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a4bb0 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.743300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2602d90 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.743313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x262b6f0 (9): Bad file descriptor 00:24:08.879 [2024-07-24 19:00:53.743351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269c660 (9): Bad file descriptor 00:24:08.880 [2024-07-24 19:00:53.743364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:08.880 [2024-07-24 19:00:53.743396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.880 [2024-07-24 19:00:53.743427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:08.880 [2024-07-24 19:00:53.743461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:08.880 [2024-07-24 19:00:53.743496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:08.880 [2024-07-24 19:00:53.743527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:08.880 [2024-07-24 19:00:53.743592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.880 [2024-07-24 19:00:53.743612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.880 [2024-07-24 19:00:53.743622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.880 [2024-07-24 19:00:53.743630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.880 [2024-07-24 19:00:53.743639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.880 [2024-07-24 19:00:53.743647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.880 [2024-07-24 19:00:53.743656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:08.880 [2024-07-24 19:00:53.743665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:08.880 [2024-07-24 19:00:53.743674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:08.880 [2024-07-24 19:00:53.743710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
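The "(9): Bad file descriptor" in the "Failed to flush tqpair" records above is errno EBADF: by the time the driver tries to flush each tqpair, its socket has already been closed, which is why every controller reinitialization then fails and the reset cascade ends in "Resetting controller failed." The same errno is easy to reproduce on any closed descriptor:

/* EBADF (errno 9 on Linux): I/O on a descriptor that was already closed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd = dup(1);   /* any valid descriptor */
	close(fd);         /* ...closed once... */
	if (write(fd, "x", 1) < 0) {
		/* prints "errno 9: Bad file descriptor" */
		printf("errno %d: %s\n", errno, strerror(errno));
	}
	return 0;
}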
00:24:09.446 19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:09.446 19:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2588494 00:24:10.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2588494) - No such process 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.383 rmmod nvme_tcp 00:24:10.383 rmmod nvme_fabrics 00:24:10.383 rmmod nvme_keyring 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:10.383 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.384 19:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.384 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.918 00:24:12.918 real 0m8.648s 00:24:12.918 user 0m22.663s 00:24:12.918 sys 0m1.568s 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.918 ************************************ 00:24:12.918 END TEST nvmf_shutdown_tc3 00:24:12.918 ************************************ 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:12.918 00:24:12.918 real 0m33.666s 00:24:12.918 user 1m26.565s 00:24:12.918 sys 0m9.173s 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:12.918 ************************************ 00:24:12.918 END TEST nvmf_shutdown 00:24:12.918 ************************************ 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:24:12.918 00:24:12.918 real 12m31.160s 00:24:12.918 user 27m28.362s 00:24:12.918 sys 3m22.678s 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.918 19:00:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:12.918 ************************************ 00:24:12.918 END TEST nvmf_target_extra 00:24:12.918 ************************************ 00:24:12.918 19:00:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:12.918 19:00:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.918 19:00:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.918 19:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.918 ************************************ 00:24:12.918 START TEST nvmf_host 00:24:12.918 ************************************ 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:12.918 * Looking for test storage... 
00:24:12.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.918 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.919 ************************************ 00:24:12.919 START TEST nvmf_multicontroller 00:24:12.919 ************************************ 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:12.919 * Looking for test storage... 
00:24:12.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.919 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.920 19:00:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.203 19:01:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.203 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.466 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:18.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:18.467 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:18.467 Found net devices under 0000:af:00.0: cvl_0_0 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:18.467 Found net devices under 0000:af:00.1: cvl_0_1 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.467 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:18.467 00:24:18.467 --- 10.0.0.2 ping statistics --- 00:24:18.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.467 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:24:18.725 00:24:18.725 --- 10.0.0.1 ping statistics --- 00:24:18.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.725 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2592897 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2592897 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2592897 ']' 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.725 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:18.726 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.726 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.726 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.726 19:01:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.726 [2024-07-24 19:01:03.575800] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
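Note on the topology the trace above just built: nvmf_tcp_init moves one port of the e810 pair (cvl_0_0) into a fresh network namespace to act as the target at 10.0.0.2, leaves its peer (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, and nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the roughly equivalent commands, with device names, addresses, and flags taken from the trace; the RPC readiness loop at the end is illustrative, standing in for waitforlisten:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                    # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root ns
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target sanity check
  # Start the target inside the namespace and poll its JSON-RPC socket.
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done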
00:24:18.726 [2024-07-24 19:01:03.575854] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.726 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.726 [2024-07-24 19:01:03.662248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:18.984 [2024-07-24 19:01:03.767199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.984 [2024-07-24 19:01:03.767248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.984 [2024-07-24 19:01:03.767262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.984 [2024-07-24 19:01:03.767274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.984 [2024-07-24 19:01:03.767285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.984 [2024-07-24 19:01:03.767407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.984 [2024-07-24 19:01:03.767887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.984 [2024-07-24 19:01:03.767891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.581 [2024-07-24 19:01:04.496769] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.581 Malloc0 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.581 
19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.581 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 [2024-07-24 19:01:04.578422] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 [2024-07-24 19:01:04.586360] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 Malloc1 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.844 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2593174 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2593174 /var/tmp/bdevperf.sock 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2593174 ']' 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
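The rpc_cmd calls traced above are a thin wrapper around scripts/rpc.py against the target's default socket, so the target-side provisioning reduces to roughly the following sequence (arguments copied from the rpc_cmd calls; cnode2 is created the same way with Malloc1 and serial SPDK00000000000002, giving two subsystems that each listen on ports 4420 and 4421):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ram-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two portals on the same subsystem: the basis for the multipath tests below.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421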
00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.845 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.104 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.104 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:20.104 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:20.104 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.104 19:01:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.104 NVMe0n1 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.104 1 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.104 request: 00:24:20.104 { 00:24:20.104 "name": "NVMe0", 00:24:20.104 "trtype": "tcp", 00:24:20.104 "traddr": "10.0.0.2", 00:24:20.104 "adrfam": "ipv4", 00:24:20.104 
"trsvcid": "4420", 00:24:20.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.104 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:20.104 "hostaddr": "10.0.0.2", 00:24:20.104 "hostsvcid": "60000", 00:24:20.104 "prchk_reftag": false, 00:24:20.104 "prchk_guard": false, 00:24:20.104 "hdgst": false, 00:24:20.104 "ddgst": false, 00:24:20.104 "method": "bdev_nvme_attach_controller", 00:24:20.104 "req_id": 1 00:24:20.104 } 00:24:20.104 Got JSON-RPC error response 00:24:20.104 response: 00:24:20.104 { 00:24:20.104 "code": -114, 00:24:20.104 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:20.104 } 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.104 request: 00:24:20.104 { 00:24:20.104 "name": "NVMe0", 00:24:20.104 "trtype": "tcp", 00:24:20.104 "traddr": "10.0.0.2", 00:24:20.104 "adrfam": "ipv4", 00:24:20.104 "trsvcid": "4420", 00:24:20.104 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:20.104 "hostaddr": "10.0.0.2", 00:24:20.104 "hostsvcid": "60000", 00:24:20.104 "prchk_reftag": false, 00:24:20.104 "prchk_guard": false, 00:24:20.104 "hdgst": false, 00:24:20.104 "ddgst": false, 00:24:20.104 "method": "bdev_nvme_attach_controller", 00:24:20.104 "req_id": 1 00:24:20.104 } 00:24:20.104 Got JSON-RPC error response 00:24:20.104 response: 00:24:20.104 { 00:24:20.104 "code": -114, 00:24:20.104 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:24:20.104 } 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:20.104 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:20.105 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.364 request: 00:24:20.364 { 00:24:20.364 "name": "NVMe0", 00:24:20.364 "trtype": "tcp", 00:24:20.364 "traddr": "10.0.0.2", 00:24:20.364 "adrfam": "ipv4", 00:24:20.364 "trsvcid": "4420", 00:24:20.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.364 "hostaddr": "10.0.0.2", 00:24:20.364 "hostsvcid": "60000", 00:24:20.364 "prchk_reftag": false, 00:24:20.364 "prchk_guard": false, 00:24:20.364 "hdgst": false, 00:24:20.364 "ddgst": false, 00:24:20.364 "multipath": "disable", 00:24:20.364 "method": "bdev_nvme_attach_controller", 00:24:20.364 "req_id": 1 00:24:20.364 } 00:24:20.364 Got JSON-RPC error response 00:24:20.364 response: 00:24:20.364 { 00:24:20.364 "code": -114, 00:24:20.364 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:20.364 } 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.364 request: 00:24:20.364 { 00:24:20.364 "name": "NVMe0", 00:24:20.364 "trtype": "tcp", 00:24:20.364 "traddr": "10.0.0.2", 00:24:20.364 "adrfam": "ipv4", 00:24:20.364 "trsvcid": "4420", 00:24:20.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.364 "hostaddr": "10.0.0.2", 00:24:20.364 "hostsvcid": "60000", 00:24:20.364 "prchk_reftag": false, 00:24:20.364 "prchk_guard": false, 00:24:20.364 "hdgst": false, 00:24:20.364 "ddgst": false, 00:24:20.364 "multipath": "failover", 00:24:20.364 "method": "bdev_nvme_attach_controller", 00:24:20.364 "req_id": 1 00:24:20.364 } 00:24:20.364 Got JSON-RPC error response 00:24:20.364 response: 00:24:20.364 { 00:24:20.364 "code": -114, 00:24:20.364 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:20.364 } 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.364 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 00:24:20.624 19:01:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:20.624 19:01:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.999 0 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2593174 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2593174 ']' 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2593174 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2593174 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
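What the initiator-side exchange above amounts to: bdevperf was started with -z, so it idles until driven over its own RPC socket at /var/tmp/bdevperf.sock; the attach attempts with a mismatched hostnqn, a different subnqn, and multipath set to disable or failover on an already-attached path are each rejected with the -114 responses shown, and only the plain re-attach on port 4421 is accepted as a second path under the existing NVMe0 name. A rough replay of just the accepted steps, with binary paths and options as traced and error handling omitted:

  SOCK=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w write -t 1 -f &
  # First path: creates bdev NVMe0n1 backed by cnode1 via port 4420.
  ./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Second path to the same controller name via port 4421 (multipath).
  ./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the preconfigured workload: 1 s of 4 KiB writes at queue depth 128.
  ./examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests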
00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2593174' 00:24:21.999 killing process with pid 2593174 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2593174 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2593174 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:21.999 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:24:22.000 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:22.000 [2024-07-24 19:01:04.706460] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:24:22.000 [2024-07-24 19:01:04.706532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593174 ] 00:24:22.000 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.000 [2024-07-24 19:01:04.788797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.000 [2024-07-24 19:01:04.876261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.000 [2024-07-24 19:01:05.502762] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 850b6967-c8d2-4765-8712-693285b7d7ff already exists 00:24:22.000 [2024-07-24 19:01:05.502798] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:850b6967-c8d2-4765-8712-693285b7d7ff alias for bdev NVMe1n1 00:24:22.000 [2024-07-24 19:01:05.502810] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:22.000 Running I/O for 1 seconds... 00:24:22.000 00:24:22.000 Latency(us) 00:24:22.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.000 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:22.000 NVMe0n1 : 1.01 7729.20 30.19 0.00 0.00 16470.56 8340.95 29669.93 00:24:22.000 =================================================================================================================== 00:24:22.000 Total : 7729.20 30.19 0.00 0.00 16470.56 8340.95 29669.93 00:24:22.000 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.000 00:24:22.000 Latency(us) 00:24:22.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.000 =================================================================================================================== 00:24:22.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.000 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.000 19:01:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.000 rmmod nvme_tcp 00:24:22.000 rmmod nvme_fabrics 00:24:22.259 rmmod nvme_keyring 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2592897 ']' 00:24:22.259 19:01:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2592897 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2592897 ']' 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2592897 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2592897 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2592897' 00:24:22.259 killing process with pid 2592897 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2592897 00:24:22.259 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2592897 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.518 19:01:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.052 00:24:25.052 real 0m11.903s 00:24:25.052 user 0m15.044s 00:24:25.052 sys 0m5.155s 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.052 ************************************ 00:24:25.052 END TEST nvmf_multicontroller 00:24:25.052 ************************************ 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.052 ************************************ 00:24:25.052 START TEST nvmf_aer 00:24:25.052 ************************************ 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:25.052 * Looking for test storage... 00:24:25.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.052 19:01:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:31.615 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:31.615 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:31.615 Found net devices under 0000:af:00.0: cvl_0_0 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.615 19:01:15 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:31.615 Found net devices under 0000:af:00.1: cvl_0_1 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.615 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:31.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:24:31.616 00:24:31.616 --- 10.0.0.2 ping statistics --- 00:24:31.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.616 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:24:31.616 00:24:31.616 --- 10.0.0.1 ping statistics --- 00:24:31.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.616 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2597199 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2597199 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2597199 ']' 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.616 19:01:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.616 [2024-07-24 19:01:15.718615] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
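
Note on the test network just brought up by nvmftestinit (nvmf/common.sh@229-268 above): the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-checked before any NVMe/TCP traffic flows. A minimal sketch of the same two-namespace layout for a machine without these NICs, using a veth pair (the veth_ini/veth_tgt names are made up for illustration; the namespace name, addresses, firewall rule, and ping checks mirror the trace):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace, as in the trace
  ip link add veth_ini type veth peer name veth_tgt   # veth pair standing in for the two E810 ports
  ip link set veth_tgt netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev veth_ini                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_ini up
  ip netns exec cvl_0_0_ns_spdk ip link set veth_tgt up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # 4420 = standard NVMe/TCP port
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
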
00:24:31.616 [2024-07-24 19:01:15.718671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.616 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.616 [2024-07-24 19:01:15.806262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.616 [2024-07-24 19:01:15.898052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.616 [2024-07-24 19:01:15.898096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.616 [2024-07-24 19:01:15.898106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.616 [2024-07-24 19:01:15.898115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.616 [2024-07-24 19:01:15.898123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.616 [2024-07-24 19:01:15.898175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.616 [2024-07-24 19:01:15.898287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.616 [2024-07-24 19:01:15.898397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.616 [2024-07-24 19:01:15.898398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.874 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.875 [2024-07-24 19:01:16.710858] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.875 Malloc0 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.875 19:01:16 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:31.875 [2024-07-24 19:01:16.762608] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:31.875 [
00:24:31.875 {
00:24:31.875 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:31.875 "subtype": "Discovery",
00:24:31.875 "listen_addresses": [],
00:24:31.875 "allow_any_host": true,
00:24:31.875 "hosts": []
00:24:31.875 },
00:24:31.875 {
00:24:31.875 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:31.875 "subtype": "NVMe",
00:24:31.875 "listen_addresses": [
00:24:31.875 {
00:24:31.875 "trtype": "TCP",
00:24:31.875 "adrfam": "IPv4",
00:24:31.875 "traddr": "10.0.0.2",
00:24:31.875 "trsvcid": "4420"
00:24:31.875 }
00:24:31.875 ],
00:24:31.875 "allow_any_host": true,
00:24:31.875 "hosts": [],
00:24:31.875 "serial_number": "SPDK00000000000001",
00:24:31.875 "model_number": "SPDK bdev Controller",
00:24:31.875 "max_namespaces": 2,
00:24:31.875 "min_cntlid": 1,
00:24:31.875 "max_cntlid": 65519,
00:24:31.875 "namespaces": [
00:24:31.875 {
00:24:31.875 "nsid": 1,
00:24:31.875 "bdev_name": "Malloc0",
00:24:31.875 "name": "Malloc0",
00:24:31.875 "nguid": "48A8020DCCB74EECA5FBA87E168F875F",
00:24:31.875 "uuid": "48a8020d-ccb7-4eec-a5fb-a87e168f875f"
00:24:31.875 }
00:24:31.875 ]
00:24:31.875 }
00:24:31.875 ]
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2597477
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0
00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!'
-e /tmp/aer_touch_file ']' 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:24:31.875 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:24:31.875 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.134 19:01:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.134 Malloc1 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.134 [ 00:24:32.134 { 00:24:32.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:32.134 "subtype": "Discovery", 00:24:32.134 "listen_addresses": [], 00:24:32.134 "allow_any_host": true, 00:24:32.134 "hosts": [] 00:24:32.134 }, 00:24:32.134 { 00:24:32.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.134 "subtype": "NVMe", 00:24:32.134 "listen_addresses": [ 00:24:32.134 { 00:24:32.134 "trtype": "TCP", 00:24:32.134 "adrfam": "IPv4", 00:24:32.134 "traddr": "10.0.0.2", 00:24:32.134 "trsvcid": "4420" 00:24:32.134 } 00:24:32.134 ], 00:24:32.134 "allow_any_host": true, 00:24:32.134 "hosts": [], 00:24:32.134 "serial_number": "SPDK00000000000001", 00:24:32.134 "model_number": "SPDK bdev Controller", 00:24:32.134 "max_namespaces": 2, 00:24:32.134 "min_cntlid": 1, 00:24:32.134 "max_cntlid": 65519, 00:24:32.134 "namespaces": [ 00:24:32.134 { 00:24:32.134 "nsid": 1, 00:24:32.134 "bdev_name": "Malloc0", 00:24:32.134 "name": "Malloc0", 00:24:32.134 "nguid": "48A8020DCCB74EECA5FBA87E168F875F", 00:24:32.134 "uuid": "48a8020d-ccb7-4eec-a5fb-a87e168f875f" 00:24:32.134 }, 00:24:32.134 { 00:24:32.134 "nsid": 2, 00:24:32.134 "bdev_name": "Malloc1", 00:24:32.134 "name": "Malloc1", 00:24:32.134 "nguid": 
"868FC41FFF454386928390981527D858", 00:24:32.134 "uuid": "868fc41f-ff45-4386-9283-90981527d858" 00:24:32.134 } 00:24:32.134 ] 00:24:32.134 } 00:24:32.134 ] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2597477 00:24:32.134 Asynchronous Event Request test 00:24:32.134 Attaching to 10.0.0.2 00:24:32.134 Attached to 10.0.0.2 00:24:32.134 Registering asynchronous event callbacks... 00:24:32.134 Starting namespace attribute notice tests for all controllers... 00:24:32.134 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:32.134 aer_cb - Changed Namespace 00:24:32.134 Cleaning up... 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.134 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.134 rmmod nvme_tcp 00:24:32.393 rmmod nvme_fabrics 00:24:32.393 rmmod nvme_keyring 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2597199 ']' 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2597199 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2597199 ']' 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2597199 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@953 -- # uname 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2597199 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2597199' 00:24:32.393 killing process with pid 2597199 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2597199 00:24:32.393 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2597199 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.653 19:01:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.565 00:24:34.565 real 0m9.897s 00:24:34.565 user 0m7.914s 00:24:34.565 sys 0m4.920s 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.565 ************************************ 00:24:34.565 END TEST nvmf_aer 00:24:34.565 ************************************ 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.565 19:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.824 ************************************ 00:24:34.824 START TEST nvmf_async_init 00:24:34.824 ************************************ 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:34.824 * Looking for test storage... 
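
Note: the nvmf_aer pass that just finished (END TEST nvmf_aer above) reduces to a short RPC sequence. Condensed from the rpc_cmd trace, with rpc.py standing in for spdk/scripts/rpc.py against /var/tmp/spdk.sock and long paths shortened:

  rpc.py nvmf_create_transport -t tcp -o -u 8192                  # aer.sh@14
  rpc.py bdev_malloc_create 64 512 --name Malloc0                 # aer.sh@16
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test/nvme/aer/aer connects (-n 2) and arms the AER callback; the hot add below
  # then fires the Namespace Attribute Changed notice seen in the output
  # ("aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00"):
  rpc.py bdev_malloc_create 64 4096 --name Malloc1                # aer.sh@39
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

Log page 4 is the Changed Namespace List, which is why the second nvmf_get_subsystems dump above shows Malloc1 as nsid 2 once the event has been observed.
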
00:24:34.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.824 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:34.825 19:01:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=31fc88c8604643d8852f3d01e0b9a994 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.825 19:01:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:41.415 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:41.415 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.415 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
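
Note: gather_supported_nvmf_pci_devs (traced here and in the aer run earlier) classifies NICs purely by PCI vendor:device ID; 0x8086:0x159b is an Intel E810 variant, which is why both 0000:af:00.0 and 0000:af:00.1 land in the e810 list and their netdev names (cvl_0_0/cvl_0_1) are read back from sysfs. A hand-check of the same match, assuming lspci is available on the box:

  lspci -d 8086:159b                          # lists the two E810 ports found above
  ls /sys/bus/pci/devices/0000:af:00.0/net    # -> cvl_0_0, the netdev the script globs
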
00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:41.416 Found net devices under 0000:af:00.0: cvl_0_0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:41.416 Found net devices under 0000:af:00.1: cvl_0_1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:24:41.416 00:24:41.416 --- 10.0.0.2 ping statistics --- 00:24:41.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.416 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:41.416 00:24:41.416 --- 10.0.0.1 ping statistics --- 00:24:41.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.416 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2601146 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2601146 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2601146 ']' 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-24 19:01:25.642572] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:24:41.416 [2024-07-24 19:01:25.642635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.416 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.416 [2024-07-24 19:01:25.729971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.416 [2024-07-24 19:01:25.818541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.416 [2024-07-24 19:01:25.818585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.416 [2024-07-24 19:01:25.818596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.416 [2024-07-24 19:01:25.818611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.416 [2024-07-24 19:01:25.818619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.416 [2024-07-24 19:01:25.818641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-24 19:01:25.961864] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 null0 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:41.416 19:01:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 31fc88c8604643d8852f3d01e0b9a994 00:24:41.417 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 19:01:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 [2024-07-24 19:01:26.006118] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 nvme0n1 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 [ 00:24:41.417 { 00:24:41.417 "name": "nvme0n1", 00:24:41.417 "aliases": [ 00:24:41.417 "31fc88c8-6046-43d8-852f-3d01e0b9a994" 00:24:41.417 ], 00:24:41.417 "product_name": "NVMe disk", 00:24:41.417 "block_size": 512, 00:24:41.417 "num_blocks": 2097152, 00:24:41.417 "uuid": "31fc88c8-6046-43d8-852f-3d01e0b9a994", 00:24:41.417 "assigned_rate_limits": { 00:24:41.417 "rw_ios_per_sec": 0, 00:24:41.417 "rw_mbytes_per_sec": 0, 00:24:41.417 "r_mbytes_per_sec": 0, 00:24:41.417 "w_mbytes_per_sec": 0 00:24:41.417 }, 00:24:41.417 "claimed": false, 00:24:41.417 "zoned": false, 00:24:41.417 "supported_io_types": { 00:24:41.417 "read": true, 00:24:41.417 "write": true, 00:24:41.417 "unmap": false, 00:24:41.417 "flush": true, 00:24:41.417 "reset": true, 00:24:41.417 "nvme_admin": true, 00:24:41.417 "nvme_io": true, 00:24:41.417 "nvme_io_md": false, 00:24:41.417 "write_zeroes": true, 00:24:41.417 "zcopy": false, 00:24:41.417 "get_zone_info": false, 00:24:41.417 "zone_management": false, 00:24:41.417 "zone_append": false, 00:24:41.417 "compare": true, 00:24:41.417 "compare_and_write": true, 00:24:41.417 "abort": true, 00:24:41.417 "seek_hole": false, 00:24:41.417 "seek_data": false, 00:24:41.417 "copy": true, 00:24:41.417 "nvme_iov_md": 
false 00:24:41.417 }, 00:24:41.417 "memory_domains": [ 00:24:41.417 { 00:24:41.417 "dma_device_id": "system", 00:24:41.417 "dma_device_type": 1 00:24:41.417 } 00:24:41.417 ], 00:24:41.417 "driver_specific": { 00:24:41.417 "nvme": [ 00:24:41.417 { 00:24:41.417 "trid": { 00:24:41.417 "trtype": "TCP", 00:24:41.417 "adrfam": "IPv4", 00:24:41.417 "traddr": "10.0.0.2", 00:24:41.417 "trsvcid": "4420", 00:24:41.417 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:41.417 }, 00:24:41.417 "ctrlr_data": { 00:24:41.417 "cntlid": 1, 00:24:41.417 "vendor_id": "0x8086", 00:24:41.417 "model_number": "SPDK bdev Controller", 00:24:41.417 "serial_number": "00000000000000000000", 00:24:41.417 "firmware_revision": "24.09", 00:24:41.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.417 "oacs": { 00:24:41.417 "security": 0, 00:24:41.417 "format": 0, 00:24:41.417 "firmware": 0, 00:24:41.417 "ns_manage": 0 00:24:41.417 }, 00:24:41.417 "multi_ctrlr": true, 00:24:41.417 "ana_reporting": false 00:24:41.417 }, 00:24:41.417 "vs": { 00:24:41.417 "nvme_version": "1.3" 00:24:41.417 }, 00:24:41.417 "ns_data": { 00:24:41.417 "id": 1, 00:24:41.417 "can_share": true 00:24:41.417 } 00:24:41.417 } 00:24:41.417 ], 00:24:41.417 "mp_policy": "active_passive" 00:24:41.417 } 00:24:41.417 } 00:24:41.417 ] 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 [2024-07-24 19:01:26.268329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:41.417 [2024-07-24 19:01:26.268401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4bb00 (9): Bad file descriptor 00:24:41.417 [2024-07-24 19:01:26.402723] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 [ 00:24:41.417 { 00:24:41.417 "name": "nvme0n1", 00:24:41.417 "aliases": [ 00:24:41.417 "31fc88c8-6046-43d8-852f-3d01e0b9a994" 00:24:41.417 ], 00:24:41.417 "product_name": "NVMe disk", 00:24:41.417 "block_size": 512, 00:24:41.417 "num_blocks": 2097152, 00:24:41.417 "uuid": "31fc88c8-6046-43d8-852f-3d01e0b9a994", 00:24:41.417 "assigned_rate_limits": { 00:24:41.417 "rw_ios_per_sec": 0, 00:24:41.417 "rw_mbytes_per_sec": 0, 00:24:41.417 "r_mbytes_per_sec": 0, 00:24:41.417 "w_mbytes_per_sec": 0 00:24:41.417 }, 00:24:41.417 "claimed": false, 00:24:41.417 "zoned": false, 00:24:41.417 "supported_io_types": { 00:24:41.417 "read": true, 00:24:41.417 "write": true, 00:24:41.417 "unmap": false, 00:24:41.417 "flush": true, 00:24:41.417 "reset": true, 00:24:41.417 "nvme_admin": true, 00:24:41.417 "nvme_io": true, 00:24:41.417 "nvme_io_md": false, 00:24:41.417 "write_zeroes": true, 00:24:41.417 "zcopy": false, 00:24:41.417 "get_zone_info": false, 00:24:41.417 "zone_management": false, 00:24:41.417 "zone_append": false, 00:24:41.417 "compare": true, 00:24:41.684 "compare_and_write": true, 00:24:41.684 "abort": true, 00:24:41.684 "seek_hole": false, 00:24:41.684 "seek_data": false, 00:24:41.684 "copy": true, 00:24:41.684 "nvme_iov_md": false 00:24:41.684 }, 00:24:41.684 "memory_domains": [ 00:24:41.684 { 00:24:41.684 "dma_device_id": "system", 00:24:41.684 "dma_device_type": 1 00:24:41.684 } 00:24:41.684 ], 00:24:41.684 "driver_specific": { 00:24:41.684 "nvme": [ 00:24:41.684 { 00:24:41.684 "trid": { 00:24:41.684 "trtype": "TCP", 00:24:41.684 "adrfam": "IPv4", 00:24:41.684 "traddr": "10.0.0.2", 00:24:41.684 "trsvcid": "4420", 00:24:41.684 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:41.684 }, 00:24:41.684 "ctrlr_data": { 00:24:41.684 "cntlid": 2, 00:24:41.684 "vendor_id": "0x8086", 00:24:41.684 "model_number": "SPDK bdev Controller", 00:24:41.684 "serial_number": "00000000000000000000", 00:24:41.684 "firmware_revision": "24.09", 00:24:41.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.684 "oacs": { 00:24:41.684 "security": 0, 00:24:41.684 "format": 0, 00:24:41.684 "firmware": 0, 00:24:41.684 "ns_manage": 0 00:24:41.684 }, 00:24:41.684 "multi_ctrlr": true, 00:24:41.684 "ana_reporting": false 00:24:41.684 }, 00:24:41.684 "vs": { 00:24:41.684 "nvme_version": "1.3" 00:24:41.684 }, 00:24:41.684 "ns_data": { 00:24:41.684 "id": 1, 00:24:41.684 "can_share": true 00:24:41.684 } 00:24:41.684 } 00:24:41.684 ], 00:24:41.684 "mp_policy": "active_passive" 00:24:41.684 } 00:24:41.684 } 00:24:41.684 ] 00:24:41.684 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.684 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.684 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.684 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KemRZ8QBai 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KemRZ8QBai 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 [2024-07-24 19:01:26.468989] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.685 [2024-07-24 19:01:26.469134] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KemRZ8QBai 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 [2024-07-24 19:01:26.477001] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KemRZ8QBai 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 [2024-07-24 19:01:26.489061] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.685 [2024-07-24 19:01:26.489109] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:41.685 nvme0n1 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 [ 00:24:41.685 { 00:24:41.685 "name": "nvme0n1", 00:24:41.685 "aliases": [ 00:24:41.685 "31fc88c8-6046-43d8-852f-3d01e0b9a994" 00:24:41.685 ], 00:24:41.685 "product_name": "NVMe disk", 00:24:41.685 "block_size": 512, 00:24:41.685 "num_blocks": 2097152, 00:24:41.685 "uuid": "31fc88c8-6046-43d8-852f-3d01e0b9a994", 00:24:41.685 "assigned_rate_limits": { 00:24:41.685 "rw_ios_per_sec": 0, 00:24:41.685 "rw_mbytes_per_sec": 0, 00:24:41.685 "r_mbytes_per_sec": 0, 00:24:41.685 "w_mbytes_per_sec": 0 00:24:41.685 }, 00:24:41.685 "claimed": false, 00:24:41.685 "zoned": false, 00:24:41.685 "supported_io_types": { 00:24:41.685 "read": true, 00:24:41.685 "write": true, 00:24:41.685 "unmap": false, 00:24:41.685 "flush": true, 00:24:41.685 "reset": true, 00:24:41.685 "nvme_admin": true, 00:24:41.685 "nvme_io": true, 00:24:41.685 "nvme_io_md": false, 00:24:41.685 "write_zeroes": true, 00:24:41.685 "zcopy": false, 00:24:41.685 "get_zone_info": false, 00:24:41.685 "zone_management": false, 00:24:41.685 "zone_append": false, 00:24:41.685 "compare": true, 00:24:41.685 "compare_and_write": true, 00:24:41.685 "abort": true, 00:24:41.685 "seek_hole": false, 00:24:41.685 "seek_data": false, 00:24:41.685 "copy": true, 00:24:41.685 "nvme_iov_md": false 00:24:41.685 }, 00:24:41.685 "memory_domains": [ 00:24:41.685 { 00:24:41.685 "dma_device_id": "system", 00:24:41.685 "dma_device_type": 1 00:24:41.685 } 00:24:41.685 ], 00:24:41.685 "driver_specific": { 00:24:41.685 "nvme": [ 00:24:41.685 { 00:24:41.685 "trid": { 00:24:41.685 "trtype": "TCP", 00:24:41.685 "adrfam": "IPv4", 00:24:41.685 "traddr": "10.0.0.2", 00:24:41.685 "trsvcid": "4421", 00:24:41.685 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:41.685 }, 00:24:41.685 "ctrlr_data": { 00:24:41.685 "cntlid": 3, 00:24:41.685 "vendor_id": "0x8086", 00:24:41.685 "model_number": "SPDK bdev Controller", 00:24:41.685 "serial_number": "00000000000000000000", 00:24:41.685 "firmware_revision": "24.09", 00:24:41.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.685 "oacs": { 00:24:41.685 "security": 0, 00:24:41.685 "format": 0, 00:24:41.685 "firmware": 0, 00:24:41.685 "ns_manage": 0 00:24:41.685 }, 00:24:41.685 "multi_ctrlr": true, 00:24:41.685 "ana_reporting": false 00:24:41.685 }, 00:24:41.685 "vs": { 00:24:41.685 "nvme_version": "1.3" 00:24:41.685 }, 00:24:41.685 "ns_data": { 00:24:41.685 "id": 1, 00:24:41.685 "can_share": true 00:24:41.685 } 00:24:41.685 } 00:24:41.685 ], 00:24:41.685 "mp_policy": "active_passive" 00:24:41.685 } 00:24:41.685 } 00:24:41.685 ] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.KemRZ8QBai 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:41.685 19:01:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.685 rmmod nvme_tcp 00:24:41.685 rmmod nvme_fabrics 00:24:41.685 rmmod nvme_keyring 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2601146 ']' 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2601146 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2601146 ']' 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2601146 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.685 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2601146 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2601146' 00:24:41.945 killing process with pid 2601146 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2601146 00:24:41.945 [2024-07-24 19:01:26.697504] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:41.945 [2024-07-24 19:01:26.697535] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2601146 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.945 19:01:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.945 19:01:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:44.481 00:24:44.481 real 0m9.375s 00:24:44.481 user 0m3.028s 00:24:44.481 sys 0m4.793s 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 ************************************ 00:24:44.481 END TEST nvmf_async_init 00:24:44.481 ************************************ 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.481 19:01:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 ************************************ 00:24:44.481 START TEST dma 00:24:44.481 ************************************ 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:44.481 * Looking for test storage... 00:24:44.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.481 
19:01:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.481 19:01:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.482 19:01:29 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:44.482 00:24:44.482 real 0m0.117s 00:24:44.482 user 0m0.054s 00:24:44.482 sys 0m0.071s 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 ************************************ 00:24:44.482 END TEST dma 00:24:44.482 ************************************ 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 ************************************ 00:24:44.482 START TEST nvmf_identify 00:24:44.482 ************************************ 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:44.482 * Looking for test storage... 00:24:44.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:44.482 19:01:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.505 19:01:34 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:50.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.505 19:01:34 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:50.505 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:50.505 Found net devices under 0000:af:00.0: cvl_0_0 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:50.505 Found net devices under 0000:af:00.1: cvl_0_1 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:50.505 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.506 19:01:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:24:50.506 00:24:50.506 --- 10.0.0.2 ping statistics --- 00:24:50.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.506 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:24:50.506 00:24:50.506 --- 10.0.0.1 ping statistics --- 00:24:50.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.506 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2605026 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2605026 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2605026 ']' 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.506 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.506 [2024-07-24 19:01:35.245680] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:24:50.506 [2024-07-24 19:01:35.245735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.506 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.506 [2024-07-24 19:01:35.325872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.506 [2024-07-24 19:01:35.422145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.506 [2024-07-24 19:01:35.422189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.506 [2024-07-24 19:01:35.422201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.506 [2024-07-24 19:01:35.422210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.506 [2024-07-24 19:01:35.422217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.506 [2024-07-24 19:01:35.422271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.506 [2024-07-24 19:01:35.422384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.506 [2024-07-24 19:01:35.422470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.506 [2024-07-24 19:01:35.422471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.766 [2024-07-24 19:01:35.536187] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.766 Malloc0 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.766 [2024-07-24 19:01:35.632104] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.766 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.767 [ 00:24:50.767 { 00:24:50.767 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:50.767 "subtype": "Discovery", 00:24:50.767 "listen_addresses": [ 00:24:50.767 { 00:24:50.767 "trtype": "TCP", 00:24:50.767 "adrfam": "IPv4", 00:24:50.767 "traddr": "10.0.0.2", 00:24:50.767 "trsvcid": "4420" 00:24:50.767 } 00:24:50.767 ], 00:24:50.767 "allow_any_host": true, 00:24:50.767 "hosts": [] 00:24:50.767 }, 00:24:50.767 { 00:24:50.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.767 "subtype": "NVMe", 00:24:50.767 "listen_addresses": [ 00:24:50.767 { 00:24:50.767 "trtype": "TCP", 00:24:50.767 "adrfam": "IPv4", 00:24:50.767 "traddr": "10.0.0.2", 00:24:50.767 "trsvcid": "4420" 00:24:50.767 } 00:24:50.767 ], 00:24:50.767 "allow_any_host": true, 00:24:50.767 "hosts": [], 00:24:50.767 "serial_number": "SPDK00000000000001", 00:24:50.767 "model_number": "SPDK bdev Controller", 00:24:50.767 "max_namespaces": 32, 00:24:50.767 "min_cntlid": 1, 00:24:50.767 "max_cntlid": 65519, 00:24:50.767 "namespaces": [ 00:24:50.767 { 00:24:50.767 "nsid": 1, 00:24:50.767 "bdev_name": "Malloc0", 00:24:50.767 "name": "Malloc0", 00:24:50.767 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:50.767 "eui64": "ABCDEF0123456789", 00:24:50.767 "uuid": "e3878ac2-7bdc-41cb-9a96-1bb72b6c1661" 00:24:50.767 } 00:24:50.767 ] 00:24:50.767 } 00:24:50.767 ] 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.767 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:50.767 [2024-07-24 19:01:35.683366] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:24:50.767 [2024-07-24 19:01:35.683400] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605052 ] 00:24:50.767 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.767 [2024-07-24 19:01:35.722147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:50.767 [2024-07-24 19:01:35.722203] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:50.767 [2024-07-24 19:01:35.722210] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:50.767 [2024-07-24 19:01:35.722224] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:50.767 [2024-07-24 19:01:35.722235] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:50.767 [2024-07-24 19:01:35.722580] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:50.767 [2024-07-24 19:01:35.722620] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa3bec0 0 00:24:50.767 [2024-07-24 19:01:35.736612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:50.767 [2024-07-24 19:01:35.736640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:50.767 [2024-07-24 19:01:35.736646] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:50.767 [2024-07-24 19:01:35.736652] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:50.767 [2024-07-24 19:01:35.736700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.736708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.736714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.767 [2024-07-24 19:01:35.736731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:50.767 [2024-07-24 19:01:35.736753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.767 [2024-07-24 19:01:35.743616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.767 [2024-07-24 19:01:35.743629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.767 [2024-07-24 19:01:35.743634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.743640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.767 [2024-07-24 19:01:35.743656] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:50.767 [2024-07-24 19:01:35.743665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:50.767 [2024-07-24 19:01:35.743672] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:24:50.767 [2024-07-24 19:01:35.743688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.743694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.743699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.767 [2024-07-24 19:01:35.743709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.767 [2024-07-24 19:01:35.743726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.767 [2024-07-24 19:01:35.743945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.767 [2024-07-24 19:01:35.743955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.767 [2024-07-24 19:01:35.743959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.743964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.767 [2024-07-24 19:01:35.743974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:50.767 [2024-07-24 19:01:35.743985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:50.767 [2024-07-24 19:01:35.743995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.767 [2024-07-24 19:01:35.744014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.767 [2024-07-24 19:01:35.744028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.767 [2024-07-24 19:01:35.744140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.767 [2024-07-24 19:01:35.744148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.767 [2024-07-24 19:01:35.744153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.767 [2024-07-24 19:01:35.744165] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:50.767 [2024-07-24 19:01:35.744176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:50.767 [2024-07-24 19:01:35.744184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.767 [2024-07-24 19:01:35.744203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.767 [2024-07-24 19:01:35.744216] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.767 [2024-07-24 19:01:35.744332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.767 [2024-07-24 19:01:35.744341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.767 [2024-07-24 19:01:35.744346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.767 [2024-07-24 19:01:35.744360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:50.767 [2024-07-24 19:01:35.744373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.767 [2024-07-24 19:01:35.744392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.767 [2024-07-24 19:01:35.744406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.767 [2024-07-24 19:01:35.744522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.767 [2024-07-24 19:01:35.744531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.767 [2024-07-24 19:01:35.744535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.767 [2024-07-24 19:01:35.744540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.767 [2024-07-24 19:01:35.744546] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:50.767 [2024-07-24 19:01:35.744552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:50.767 [2024-07-24 19:01:35.744563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:50.767 [2024-07-24 19:01:35.744670] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:50.767 [2024-07-24 19:01:35.744676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:50.767 [2024-07-24 19:01:35.744687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.744693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.744698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.744706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.768 [2024-07-24 19:01:35.744721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.768 [2024-07-24 19:01:35.744832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.768 
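Stepping back from the connect trace for a moment: the subsystem this identify pass is talking to was provisioned by the rpc_cmd calls logged earlier (identify.sh@24 through @37). Stripped of the test wrappers, the same provisioning is a handful of calls through SPDK's stock scripts/rpc.py; a condensed sketch, with every flag copied from the trace above and the /var/tmp/spdk.sock socket taken from the startup message:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                          # should match the JSON dumped above

Because the RPC channel is a path-based UNIX socket, these calls work from the root namespace even though the target process itself runs inside cvl_0_0_ns_spdk.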
[2024-07-24 19:01:35.744841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.768 [2024-07-24 19:01:35.744845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.744850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.768 [2024-07-24 19:01:35.744857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:50.768 [2024-07-24 19:01:35.744869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.744875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.744880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.744889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.768 [2024-07-24 19:01:35.744902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.768 [2024-07-24 19:01:35.745021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.768 [2024-07-24 19:01:35.745030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.768 [2024-07-24 19:01:35.745035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.768 [2024-07-24 19:01:35.745048] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:50.768 [2024-07-24 19:01:35.745055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:50.768 [2024-07-24 19:01:35.745065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:50.768 [2024-07-24 19:01:35.745075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:50.768 [2024-07-24 19:01:35.745088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.768 [2024-07-24 19:01:35.745115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.768 [2024-07-24 19:01:35.745322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.768 [2024-07-24 19:01:35.745330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.768 [2024-07-24 19:01:35.745335] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745340] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa3bec0): datao=0, datal=4096, cccid=0 00:24:50.768 [2024-07-24 19:01:35.745346] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xabee40) on tqpair(0xa3bec0): expected_datao=0, payload_size=4096 00:24:50.768 [2024-07-24 19:01:35.745352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745362] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745368] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.768 [2024-07-24 19:01:35.745426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.768 [2024-07-24 19:01:35.745431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.768 [2024-07-24 19:01:35.745445] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:50.768 [2024-07-24 19:01:35.745451] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:50.768 [2024-07-24 19:01:35.745457] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:50.768 [2024-07-24 19:01:35.745464] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:50.768 [2024-07-24 19:01:35.745470] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:50.768 [2024-07-24 19:01:35.745476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:50.768 [2024-07-24 19:01:35.745488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:50.768 [2024-07-24 19:01:35.745500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:50.768 [2024-07-24 19:01:35.745535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.768 [2024-07-24 19:01:35.745660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.768 [2024-07-24 19:01:35.745670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.768 [2024-07-24 19:01:35.745674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:50.768 [2024-07-24 19:01:35.745689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745707] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.768 [2024-07-24 19:01:35.745715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.768 [2024-07-24 19:01:35.745739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.768 [2024-07-24 19:01:35.745764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.768 [2024-07-24 19:01:35.745787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:50.768 [2024-07-24 19:01:35.745801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:50.768 [2024-07-24 19:01:35.745810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.745815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.745824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.768 [2024-07-24 19:01:35.745839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabee40, cid 0, qid 0 00:24:50.768 [2024-07-24 19:01:35.745846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabefc0, cid 1, qid 0 00:24:50.768 [2024-07-24 19:01:35.745852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf140, cid 2, qid 0 00:24:50.768 [2024-07-24 19:01:35.745858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:50.768 [2024-07-24 19:01:35.745864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf440, cid 4, qid 0 00:24:50.768 [2024-07-24 19:01:35.746051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.768 [2024-07-24 19:01:35.746060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.768 [2024-07-24 19:01:35.746064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.746072] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf440) on tqpair=0xa3bec0 00:24:50.768 [2024-07-24 19:01:35.746079] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:50.768 [2024-07-24 19:01:35.746085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:50.768 [2024-07-24 19:01:35.746099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.746105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa3bec0) 00:24:50.768 [2024-07-24 19:01:35.746114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.768 [2024-07-24 19:01:35.746127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf440, cid 4, qid 0 00:24:50.768 [2024-07-24 19:01:35.746253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.768 [2024-07-24 19:01:35.746262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.768 [2024-07-24 19:01:35.746267] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.746272] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa3bec0): datao=0, datal=4096, cccid=4 00:24:50.768 [2024-07-24 19:01:35.746277] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabf440) on tqpair(0xa3bec0): expected_datao=0, payload_size=4096 00:24:50.768 [2024-07-24 19:01:35.746283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.768 [2024-07-24 19:01:35.746292] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746297] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.769 [2024-07-24 19:01:35.746352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.769 [2024-07-24 19:01:35.746357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf440) on tqpair=0xa3bec0 00:24:50.769 [2024-07-24 19:01:35.746377] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:50.769 [2024-07-24 19:01:35.746404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa3bec0) 00:24:50.769 [2024-07-24 19:01:35.746419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.769 [2024-07-24 19:01:35.746427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa3bec0) 00:24:50.769 [2024-07-24 19:01:35.746445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:50.769 [2024-07-24 19:01:35.746463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf440, cid 4, qid 0 00:24:50.769 [2024-07-24 19:01:35.746470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf5c0, cid 5, qid 0 00:24:50.769 [2024-07-24 19:01:35.746653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.769 [2024-07-24 19:01:35.746662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.769 [2024-07-24 19:01:35.746667] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746672] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa3bec0): datao=0, datal=1024, cccid=4 00:24:50.769 [2024-07-24 19:01:35.746677] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabf440) on tqpair(0xa3bec0): expected_datao=0, payload_size=1024 00:24:50.769 [2024-07-24 19:01:35.746686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746695] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746700] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.769 [2024-07-24 19:01:35.746715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.769 [2024-07-24 19:01:35.746719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.769 [2024-07-24 19:01:35.746724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf5c0) on tqpair=0xa3bec0 00:24:51.036 [2024-07-24 19:01:35.791612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.036 [2024-07-24 19:01:35.791628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.036 [2024-07-24 19:01:35.791633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.791639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf440) on tqpair=0xa3bec0 00:24:51.036 [2024-07-24 19:01:35.791663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.791669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa3bec0) 00:24:51.036 [2024-07-24 19:01:35.791680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.036 [2024-07-24 19:01:35.791702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf440, cid 4, qid 0 00:24:51.036 [2024-07-24 19:01:35.791914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.036 [2024-07-24 19:01:35.791923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.036 [2024-07-24 19:01:35.791928] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.791933] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa3bec0): datao=0, datal=3072, cccid=4 00:24:51.036 [2024-07-24 19:01:35.791938] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabf440) on tqpair(0xa3bec0): expected_datao=0, payload_size=3072 00:24:51.036 [2024-07-24 19:01:35.791945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.791954] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.791959] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.792022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.036 [2024-07-24 19:01:35.792030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.036 [2024-07-24 19:01:35.792035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.792040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf440) on tqpair=0xa3bec0 00:24:51.036 [2024-07-24 19:01:35.792051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.792057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa3bec0) 00:24:51.036 [2024-07-24 19:01:35.792066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.036 [2024-07-24 19:01:35.792084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf440, cid 4, qid 0 00:24:51.036 [2024-07-24 19:01:35.792220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.036 [2024-07-24 19:01:35.792228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.036 [2024-07-24 19:01:35.792233] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.792238] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa3bec0): datao=0, datal=8, cccid=4 00:24:51.036 [2024-07-24 19:01:35.792244] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabf440) on tqpair(0xa3bec0): expected_datao=0, payload_size=8 00:24:51.036 [2024-07-24 19:01:35.792249] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.792261] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.792266] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.832769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.036 [2024-07-24 19:01:35.832783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.036 [2024-07-24 19:01:35.832789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.036 [2024-07-24 19:01:35.832794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf440) on tqpair=0xa3bec0 00:24:51.037 ===================================================== 00:24:51.037 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:51.037 ===================================================== 00:24:51.037 Controller Capabilities/Features 00:24:51.037 ================================ 00:24:51.037 Vendor ID: 0000 00:24:51.037 Subsystem Vendor ID: 0000 00:24:51.037 Serial Number: .................... 00:24:51.037 Model Number: ........................................ 
00:24:51.037 Firmware Version: 24.09 00:24:51.037 Recommended Arb Burst: 0 00:24:51.037 IEEE OUI Identifier: 00 00 00 00:24:51.037 Multi-path I/O 00:24:51.037 May have multiple subsystem ports: No 00:24:51.037 May have multiple controllers: No 00:24:51.037 Associated with SR-IOV VF: No 00:24:51.037 Max Data Transfer Size: 131072 00:24:51.037 Max Number of Namespaces: 0 00:24:51.037 Max Number of I/O Queues: 1024 00:24:51.037 NVMe Specification Version (VS): 1.3 00:24:51.037 NVMe Specification Version (Identify): 1.3 00:24:51.037 Maximum Queue Entries: 128 00:24:51.037 Contiguous Queues Required: Yes 00:24:51.037 Arbitration Mechanisms Supported 00:24:51.037 Weighted Round Robin: Not Supported 00:24:51.037 Vendor Specific: Not Supported 00:24:51.037 Reset Timeout: 15000 ms 00:24:51.037 Doorbell Stride: 4 bytes 00:24:51.037 NVM Subsystem Reset: Not Supported 00:24:51.037 Command Sets Supported 00:24:51.037 NVM Command Set: Supported 00:24:51.037 Boot Partition: Not Supported 00:24:51.037 Memory Page Size Minimum: 4096 bytes 00:24:51.037 Memory Page Size Maximum: 4096 bytes 00:24:51.037 Persistent Memory Region: Not Supported 00:24:51.037 Optional Asynchronous Events Supported 00:24:51.037 Namespace Attribute Notices: Not Supported 00:24:51.037 Firmware Activation Notices: Not Supported 00:24:51.037 ANA Change Notices: Not Supported 00:24:51.037 PLE Aggregate Log Change Notices: Not Supported 00:24:51.037 LBA Status Info Alert Notices: Not Supported 00:24:51.037 EGE Aggregate Log Change Notices: Not Supported 00:24:51.037 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.037 Zone Descriptor Change Notices: Not Supported 00:24:51.037 Discovery Log Change Notices: Supported 00:24:51.037 Controller Attributes 00:24:51.037 128-bit Host Identifier: Not Supported 00:24:51.037 Non-Operational Permissive Mode: Not Supported 00:24:51.037 NVM Sets: Not Supported 00:24:51.037 Read Recovery Levels: Not Supported 00:24:51.037 Endurance Groups: Not Supported 00:24:51.037 Predictable Latency Mode: Not Supported 00:24:51.037 Traffic Based Keep Alive: Not Supported 00:24:51.037 Namespace Granularity: Not Supported 00:24:51.037 SQ Associations: Not Supported 00:24:51.037 UUID List: Not Supported 00:24:51.037 Multi-Domain Subsystem: Not Supported 00:24:51.037 Fixed Capacity Management: Not Supported 00:24:51.037 Variable Capacity Management: Not Supported 00:24:51.037 Delete Endurance Group: Not Supported 00:24:51.037 Delete NVM Set: Not Supported 00:24:51.037 Extended LBA Formats Supported: Not Supported 00:24:51.037 Flexible Data Placement Supported: Not Supported 00:24:51.037 00:24:51.037 Controller Memory Buffer Support 00:24:51.037 ================================ 00:24:51.037 Supported: No 00:24:51.037 00:24:51.037 Persistent Memory Region Support 00:24:51.037 ================================ 00:24:51.037 Supported: No 00:24:51.037 00:24:51.037 Admin Command Set Attributes 00:24:51.037 ============================ 00:24:51.037 Security Send/Receive: Not Supported 00:24:51.037 Format NVM: Not Supported 00:24:51.037 Firmware Activate/Download: Not Supported 00:24:51.037 Namespace Management: Not Supported 00:24:51.037 Device Self-Test: Not Supported 00:24:51.037 Directives: Not Supported 00:24:51.037 NVMe-MI: Not Supported 00:24:51.037 Virtualization Management: Not Supported 00:24:51.037 Doorbell Buffer Config: Not Supported 00:24:51.037 Get LBA Status Capability: Not Supported 00:24:51.037 Command & Feature Lockdown Capability: Not Supported 00:24:51.037 Abort Command Limit: 1 00:24:51.037 Async
Event Request Limit: 4 00:24:51.037 Number of Firmware Slots: N/A 00:24:51.037 Firmware Slot 1 Read-Only: N/A 00:24:51.037 Firmware Activation Without Reset: N/A 00:24:51.037 Multiple Update Detection Support: N/A 00:24:51.037 Firmware Update Granularity: No Information Provided 00:24:51.037 Per-Namespace SMART Log: No 00:24:51.037 Asymmetric Namespace Access Log Page: Not Supported 00:24:51.037 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:51.037 Command Effects Log Page: Not Supported 00:24:51.037 Get Log Page Extended Data: Supported 00:24:51.037 Telemetry Log Pages: Not Supported 00:24:51.037 Persistent Event Log Pages: Not Supported 00:24:51.037 Supported Log Pages Log Page: May Support 00:24:51.037 Commands Supported & Effects Log Page: Not Supported 00:24:51.037 Feature Identifiers & Effects Log Page: May Support 00:24:51.037 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.037 Data Area 4 for Telemetry Log: Not Supported 00:24:51.037 Error Log Page Entries Supported: 128 00:24:51.037 Keep Alive: Not Supported 00:24:51.037 00:24:51.037 NVM Command Set Attributes 00:24:51.037 ========================== 00:24:51.037 Submission Queue Entry Size 00:24:51.037 Max: 1 00:24:51.037 Min: 1 00:24:51.037 Completion Queue Entry Size 00:24:51.037 Max: 1 00:24:51.037 Min: 1 00:24:51.037 Number of Namespaces: 0 00:24:51.037 Compare Command: Not Supported 00:24:51.037 Write Uncorrectable Command: Not Supported 00:24:51.037 Dataset Management Command: Not Supported 00:24:51.037 Write Zeroes Command: Not Supported 00:24:51.037 Set Features Save Field: Not Supported 00:24:51.037 Reservations: Not Supported 00:24:51.037 Timestamp: Not Supported 00:24:51.037 Copy: Not Supported 00:24:51.037 Volatile Write Cache: Not Present 00:24:51.037 Atomic Write Unit (Normal): 1 00:24:51.037 Atomic Write Unit (PFail): 1 00:24:51.037 Atomic Compare & Write Unit: 1 00:24:51.037 Fused Compare & Write: Supported 00:24:51.037 Scatter-Gather List 00:24:51.037 SGL Command Set: Supported 00:24:51.037 SGL Keyed: Supported 00:24:51.037 SGL Bit Bucket Descriptor: Not Supported 00:24:51.037 SGL Metadata Pointer: Not Supported 00:24:51.037 Oversized SGL: Not Supported 00:24:51.037 SGL Metadata Address: Not Supported 00:24:51.037 SGL Offset: Supported 00:24:51.037 Transport SGL Data Block: Not Supported 00:24:51.037 Replay Protected Memory Block: Not Supported 00:24:51.037 00:24:51.037 Firmware Slot Information 00:24:51.037 ========================= 00:24:51.037 Active slot: 0 00:24:51.037 00:24:51.037 00:24:51.037 Error Log 00:24:51.037 ========= 00:24:51.037 00:24:51.037 Active Namespaces 00:24:51.037 ================= 00:24:51.037 Discovery Log Page 00:24:51.037 ================== 00:24:51.037 Generation Counter: 2 00:24:51.037 Number of Records: 2 00:24:51.037 Record Format: 0 00:24:51.037 00:24:51.037 Discovery Log Entry 0 00:24:51.037 ---------------------- 00:24:51.037 Transport Type: 3 (TCP) 00:24:51.037 Address Family: 1 (IPv4) 00:24:51.037 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:51.037 Entry Flags: 00:24:51.037 Duplicate Returned Information: 1 00:24:51.037 Explicit Persistent Connection Support for Discovery: 1 00:24:51.037 Transport Requirements: 00:24:51.037 Secure Channel: Not Required 00:24:51.037 Port ID: 0 (0x0000) 00:24:51.037 Controller ID: 65535 (0xffff) 00:24:51.037 Admin Max SQ Size: 128 00:24:51.037 Transport Service Identifier: 4420 00:24:51.037 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:51.037 Transport Address: 10.0.0.2 00:24:51.037
Discovery Log Entry 1 00:24:51.037 ---------------------- 00:24:51.037 Transport Type: 3 (TCP) 00:24:51.037 Address Family: 1 (IPv4) 00:24:51.037 Subsystem Type: 2 (NVM Subsystem) 00:24:51.037 Entry Flags: 00:24:51.037 Duplicate Returned Information: 0 00:24:51.037 Explicit Persistent Connection Support for Discovery: 0 00:24:51.037 Transport Requirements: 00:24:51.037 Secure Channel: Not Required 00:24:51.037 Port ID: 0 (0x0000) 00:24:51.037 Controller ID: 65535 (0xffff) 00:24:51.037 Admin Max SQ Size: 128 00:24:51.037 Transport Service Identifier: 4420 00:24:51.037 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:51.038 Transport Address: 10.0.0.2 [2024-07-24 19:01:35.832899] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:51.038 [2024-07-24 19:01:35.832914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabee40) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.832922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.038 [2024-07-24 19:01:35.832930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabefc0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.832936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.038 [2024-07-24 19:01:35.832942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf140) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.832949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.038 [2024-07-24 19:01:35.832955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.832961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.038 [2024-07-24 19:01:35.832975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.832981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.832986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.832995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.833014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.833120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.833129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.833134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.833148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.833167] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.833185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.833308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.833316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.833321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.833332] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:51.038 [2024-07-24 19:01:35.833338] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:51.038 [2024-07-24 19:01:35.833353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.833372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.833386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.833502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.833511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.833515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.833533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.833552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.833566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.833693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.833702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.833706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.833723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833733] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.833742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.833756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.833868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.833876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.833881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.833898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.833908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.833917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.833930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.834046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.834054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.834059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.834076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.834098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.834111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.834227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.834236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.834240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.834257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.834276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.834289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.834398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.834407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.834411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.834429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.834448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.834461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.834580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.834588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.834593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.834617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.038 [2024-07-24 19:01:35.834636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.038 [2024-07-24 19:01:35.834650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.038 [2024-07-24 19:01:35.834762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.038 [2024-07-24 19:01:35.834770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.038 [2024-07-24 19:01:35.834775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.038 [2024-07-24 19:01:35.834792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.038 [2024-07-24 19:01:35.834805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.039 [2024-07-24 19:01:35.834814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.834827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.039 [2024-07-24 19:01:35.834947] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.834956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.834960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.834965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.039 [2024-07-24 19:01:35.834977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.834983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.834988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.039 [2024-07-24 19:01:35.834996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.835009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.039 [2024-07-24 19:01:35.835126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.835134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.835139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.039 [2024-07-24 19:01:35.835156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.039 [2024-07-24 19:01:35.835175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.835188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.039 [2024-07-24 19:01:35.835298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.835307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.835311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.039 [2024-07-24 19:01:35.835328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.039 [2024-07-24 19:01:35.835347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.835360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.039 [2024-07-24 19:01:35.835469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.835478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.835482] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.039 [2024-07-24 19:01:35.835499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.835510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.039 [2024-07-24 19:01:35.835520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.835534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.039 [2024-07-24 19:01:35.839614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.839626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.839631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.839636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.039 [2024-07-24 19:01:35.839650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.839656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.839661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa3bec0) 00:24:51.039 [2024-07-24 19:01:35.839670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.839686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabf2c0, cid 3, qid 0 00:24:51.039 [2024-07-24 19:01:35.839944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.839952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.839957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.839962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xabf2c0) on tqpair=0xa3bec0 00:24:51.039 [2024-07-24 19:01:35.839972] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:51.039 00:24:51.039 19:01:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:51.039 [2024-07-24 19:01:35.884687] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
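
The identify run above is pointed at the target with a transport-ID string ("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"). For reference, a minimal sketch of the same connect path through SPDK's public host API; this is not part of the test run, and it assumes spdk_env_init() has already completed (the "DPDK EAL parameters" line that follows):

#include <stddef.h>
#include "spdk/nvme.h"

/* Sketch: connect to the NVMe-oF/TCP subsystem this log is exercising.
 * The string below uses the same key:value format spdk_nvme_identify -r takes. */
static struct spdk_nvme_ctrlr *connect_cnode1(void)
{
    struct spdk_nvme_transport_id trid = {};

    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return NULL;
    }
    /* Synchronous connect; internally this drives the admin-queue state
     * machine that the _nvme_ctrlr_set_state DEBUG lines below trace. */
    return spdk_nvme_connect(&trid, NULL, 0);
}
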
00:24:51.039 [2024-07-24 19:01:35.884740] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605062 ] 00:24:51.039 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.039 [2024-07-24 19:01:35.921384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:51.039 [2024-07-24 19:01:35.921437] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:51.039 [2024-07-24 19:01:35.921444] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:51.039 [2024-07-24 19:01:35.921458] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:51.039 [2024-07-24 19:01:35.921467] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:51.039 [2024-07-24 19:01:35.921837] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:51.039 [2024-07-24 19:01:35.921864] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x826ec0 0 00:24:51.039 [2024-07-24 19:01:35.928617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:51.039 [2024-07-24 19:01:35.928636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:51.039 [2024-07-24 19:01:35.928641] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:51.039 [2024-07-24 19:01:35.928646] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:51.039 [2024-07-24 19:01:35.928690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.928697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.928703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.039 [2024-07-24 19:01:35.928717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:51.039 [2024-07-24 19:01:35.928737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.039 [2024-07-24 19:01:35.936615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.936628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.936632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.936637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.039 [2024-07-24 19:01:35.936648] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:51.039 [2024-07-24 19:01:35.936656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:51.039 [2024-07-24 19:01:35.936662] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:51.039 [2024-07-24 19:01:35.936677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.936682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 
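
The nvme_tcp_qpair_connect_sock entries above show the admin qpair opening a plain client socket (default "posix" impl, since sock_impl_name is NULL) before the NVMe/TCP ICReq/ICResp handshake. A hedged sketch of just that socket step using SPDK's public sock layer; the handshake itself is internal to the nvme_tcp transport:

#include "spdk/sock.h"

/* Sketch: the client-socket step traced by nvme_tcp_qpair_connect_sock.
 * Passing NULL for impl_name selects the default implementation, which is
 * what "sock_impl_name is (null)" means in the log. */
static struct spdk_sock *open_qpair_sock(void)
{
    return spdk_sock_connect("10.0.0.2", 4420, NULL);
}
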
[2024-07-24 19:01:35.936687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.039 [2024-07-24 19:01:35.936696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.936713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.039 [2024-07-24 19:01:35.936969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.936978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.936982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.936987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.039 [2024-07-24 19:01:35.936996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:51.039 [2024-07-24 19:01:35.937007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:51.039 [2024-07-24 19:01:35.937016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.937021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.937025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.039 [2024-07-24 19:01:35.937035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.039 [2024-07-24 19:01:35.937049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.039 [2024-07-24 19:01:35.937202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.039 [2024-07-24 19:01:35.937211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.039 [2024-07-24 19:01:35.937215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.039 [2024-07-24 19:01:35.937220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.039 [2024-07-24 19:01:35.937226] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:51.040 [2024-07-24 19:01:35.937237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:51.040 [2024-07-24 19:01:35.937246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.937267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.040 [2024-07-24 19:01:35.937281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.040 [2024-07-24 19:01:35.937441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.040 [2024-07-24 19:01:35.937450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.040 
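
The "read vs" and "read cap" states above are fabrics PROPERTY GET round trips for the VS and CAP controller registers. Once the controller handle exists, the cached values are visible through public getters; a small sketch (the printf formatting is illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: read back the register values fetched during "read vs"/"read cap". */
static void dump_vs_cap(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    /* CAP.TO is in 500 ms units; it is where the "Reset Timeout: 15000 ms"
     * reported in the identify output further down comes from. */
    printf("VS %u.%u, MQES %u, CAP.TO %u\n",
           vs.bits.mjr, vs.bits.mnr, cap.bits.mqes, cap.bits.to);
}
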
[2024-07-24 19:01:35.937455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.040 [2024-07-24 19:01:35.937466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:51.040 [2024-07-24 19:01:35.937479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.937497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.040 [2024-07-24 19:01:35.937511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.040 [2024-07-24 19:01:35.937674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.040 [2024-07-24 19:01:35.937684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.040 [2024-07-24 19:01:35.937688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.040 [2024-07-24 19:01:35.937699] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:51.040 [2024-07-24 19:01:35.937704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:51.040 [2024-07-24 19:01:35.937715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:51.040 [2024-07-24 19:01:35.937822] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:51.040 [2024-07-24 19:01:35.937827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:51.040 [2024-07-24 19:01:35.937836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.937846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.937855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.040 [2024-07-24 19:01:35.937870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.040 [2024-07-24 19:01:35.938028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.040 [2024-07-24 19:01:35.938036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.040 [2024-07-24 19:01:35.938041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.040 [2024-07-24 
19:01:35.938051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:51.040 [2024-07-24 19:01:35.938064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.938085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.040 [2024-07-24 19:01:35.938099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.040 [2024-07-24 19:01:35.938249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.040 [2024-07-24 19:01:35.938258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.040 [2024-07-24 19:01:35.938262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.040 [2024-07-24 19:01:35.938272] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:51.040 [2024-07-24 19:01:35.938278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:51.040 [2024-07-24 19:01:35.938288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:51.040 [2024-07-24 19:01:35.938299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:51.040 [2024-07-24 19:01:35.938310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.938323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.040 [2024-07-24 19:01:35.938337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.040 [2024-07-24 19:01:35.938558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.040 [2024-07-24 19:01:35.938567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.040 [2024-07-24 19:01:35.938571] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938576] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=4096, cccid=0 00:24:51.040 [2024-07-24 19:01:35.938582] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8a9e40) on tqpair(0x826ec0): expected_datao=0, payload_size=4096 00:24:51.040 [2024-07-24 19:01:35.938587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938648] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.938655] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.040 
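
The sequence from "check en" through "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" is the standard NVMe enable handshake: clear CC.EN, wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1. SPDK performs it internally in nvme_ctrlr.c; the sketch below only restates the logic, and prop_get()/prop_set() are hypothetical stand-ins for the FABRIC PROPERTY GET/SET round trips, not SPDK API names:

#include <stdint.h>

#define REG_CC   0x14  /* controller configuration register offset */
#define REG_CSTS 0x1c  /* controller status register offset */

extern uint32_t prop_get(uint32_t off);             /* hypothetical helper */
extern void     prop_set(uint32_t off, uint32_t v); /* hypothetical helper */

/* Sketch of the enable handshake the state machine above walks through. */
static void enable_controller(void)
{
    /* "disable and wait for CSTS.RDY = 0" */
    prop_set(REG_CC, prop_get(REG_CC) & ~1u);
    while (prop_get(REG_CSTS) & 1u) { /* poll */ }

    /* "Setting CC.EN = 1" */
    prop_set(REG_CC, prop_get(REG_CC) | 1u);

    /* "wait for CSTS.RDY = 1" */
    while (!(prop_get(REG_CSTS) & 1u)) { /* poll */ }
}
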
[2024-07-24 19:01:35.979729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.040 [2024-07-24 19:01:35.979746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.040 [2024-07-24 19:01:35.979750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.979756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.040 [2024-07-24 19:01:35.979765] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:51.040 [2024-07-24 19:01:35.979771] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:51.040 [2024-07-24 19:01:35.979777] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:51.040 [2024-07-24 19:01:35.979782] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:51.040 [2024-07-24 19:01:35.979788] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:51.040 [2024-07-24 19:01:35.979798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:51.040 [2024-07-24 19:01:35.979810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:51.040 [2024-07-24 19:01:35.979823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.979829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.979833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.979843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:51.040 [2024-07-24 19:01:35.979861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.040 [2024-07-24 19:01:35.980009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.040 [2024-07-24 19:01:35.980018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.040 [2024-07-24 19:01:35.980022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.040 [2024-07-24 19:01:35.980036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.980053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.040 [2024-07-24 19:01:35.980060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x826ec0) 
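
The IDENTIFY (06h, cdw10:00000001) round trip above fills the 4096-byte identify-controller structure; the "transport max_xfer_size", "MDTS", "CNTLID" and "fuses" lines are parsed out of it by nvme_ctrlr_identify_done. After attach, the same data is available through the public getter; a minimal sketch:

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: read the parsed identify-controller data (the source of the
 * MDTS/CNTLID values echoed in the log). */
static void show_cdata(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    printf("cntlid 0x%04x, mdts %u, nn %u\n",
           cdata->cntlid, cdata->mdts, cdata->nn);
}
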
00:24:51.040 [2024-07-24 19:01:35.980078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.040 [2024-07-24 19:01:35.980085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.980104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.040 [2024-07-24 19:01:35.980112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.040 [2024-07-24 19:01:35.980121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.040 [2024-07-24 19:01:35.980128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.040 [2024-07-24 19:01:35.980135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.980163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.041 [2024-07-24 19:01:35.980173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.041 [2024-07-24 19:01:35.980188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9e40, cid 0, qid 0 00:24:51.041 [2024-07-24 19:01:35.980198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8a9fc0, cid 1, qid 0 00:24:51.041 [2024-07-24 19:01:35.980206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa140, cid 2, qid 0 00:24:51.041 [2024-07-24 19:01:35.980213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.041 [2024-07-24 19:01:35.980219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.041 [2024-07-24 19:01:35.980507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.041 [2024-07-24 19:01:35.980517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.041 [2024-07-24 19:01:35.980522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.980527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.041 [2024-07-24 19:01:35.980532] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:51.041 [2024-07-24 19:01:35.980539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980552] 
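
The four ASYNC EVENT REQUEST (0Ch) submissions on cid 0 through cid 3 match the "Async Event Request Limit: 4" reported in the identify output below; the commands sit in the controller until an event fires. A host application observes the completions through a registered callback; a sketch:

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: receive the asynchronous events armed by the four AER commands. */
static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (!spdk_nvme_cpl_is_error(cpl)) {
        printf("AER fired: cdw0 0x%08x\n", cpl->cdw0);
    }
}

static void arm_aers(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}
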
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.980574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.980580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.041 [2024-07-24 19:01:35.980589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:51.041 [2024-07-24 19:01:35.980612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.041 [2024-07-24 19:01:35.980801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.041 [2024-07-24 19:01:35.980811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.041 [2024-07-24 19:01:35.980815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.980820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.041 [2024-07-24 19:01:35.980898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.980920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.980924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.041 [2024-07-24 19:01:35.980933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.041 [2024-07-24 19:01:35.980947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.041 [2024-07-24 19:01:35.981144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.041 [2024-07-24 19:01:35.981153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.041 [2024-07-24 19:01:35.981157] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981162] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=4096, cccid=4 00:24:51.041 [2024-07-24 19:01:35.981167] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa440) on tqpair(0x826ec0): expected_datao=0, payload_size=4096 00:24:51.041 [2024-07-24 19:01:35.981173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981186] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981191] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.041 [2024-07-24 19:01:35.981243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
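
"SET FEATURES NUMBER OF QUEUES" above negotiates how many I/O queue pairs the host may create; an application then allocates qpairs against that budget. A hedged sketch using the public helpers:

#include "spdk/nvme.h"

/* Sketch: allocate an I/O qpair out of the queue budget negotiated by
 * SET FEATURES NUMBER OF QUEUES. */
static struct spdk_nvme_qpair *make_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    /* opts.io_queue_size may be tuned here, bounded by CAP.MQES. */
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
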
enter: pdu type =5 00:24:51.041 [2024-07-24 19:01:35.981247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.041 [2024-07-24 19:01:35.981262] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:51.041 [2024-07-24 19:01:35.981281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.981294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.981303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.041 [2024-07-24 19:01:35.981317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.041 [2024-07-24 19:01:35.981332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.041 [2024-07-24 19:01:35.981519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.041 [2024-07-24 19:01:35.981528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.041 [2024-07-24 19:01:35.981532] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981536] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=4096, cccid=4 00:24:51.041 [2024-07-24 19:01:35.981542] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa440) on tqpair(0x826ec0): expected_datao=0, payload_size=4096 00:24:51.041 [2024-07-24 19:01:35.981547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981556] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981560] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.041 [2024-07-24 19:01:35.981622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.041 [2024-07-24 19:01:35.981626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.041 [2024-07-24 19:01:35.981645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.981657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:51.041 [2024-07-24 19:01:35.981667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.041 [2024-07-24 19:01:35.981680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
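
"identify active ns" and the "Namespace 1 was added" line above populate the controller's active-namespace list, which the subsequent per-namespace IDENTIFY commands then fill in. Iterating that list afterwards looks like this (a sketch; the printf is illustrative):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: walk the active-namespace list built during "identify active ns". */
static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("nsid %u: %" PRIu64 " bytes, %u-byte sectors\n", nsid,
               spdk_nvme_ns_get_size(ns), spdk_nvme_ns_get_sector_size(ns));
    }
}
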
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.041 [2024-07-24 19:01:35.981695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.041 [2024-07-24 19:01:35.981866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.041 [2024-07-24 19:01:35.981875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.041 [2024-07-24 19:01:35.981880] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981884] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=4096, cccid=4 00:24:51.041 [2024-07-24 19:01:35.981892] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa440) on tqpair(0x826ec0): expected_datao=0, payload_size=4096 00:24:51.041 [2024-07-24 19:01:35.981898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981906] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.041 [2024-07-24 19:01:35.981952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.041 [2024-07-24 19:01:35.981959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.041 [2024-07-24 19:01:35.981964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.981968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.042 [2024-07-24 19:01:35.981978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.981989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.981999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.982009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.982016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.982022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.982028] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:51.042 [2024-07-24 19:01:35.982034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:51.042 [2024-07-24 19:01:35.982041] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:51.042 [2024-07-24 19:01:35.982058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.982071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.982080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.982096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.042 [2024-07-24 19:01:35.982114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.042 [2024-07-24 19:01:35.982121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa5c0, cid 5, qid 0 00:24:51.042 [2024-07-24 19:01:35.982338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.042 [2024-07-24 19:01:35.982347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.042 [2024-07-24 19:01:35.982351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.042 [2024-07-24 19:01:35.982364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.042 [2024-07-24 19:01:35.982371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.042 [2024-07-24 19:01:35.982376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa5c0) on tqpair=0x826ec0 00:24:51.042 [2024-07-24 19:01:35.982395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.982408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.982421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa5c0, cid 5, qid 0 00:24:51.042 [2024-07-24 19:01:35.982593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.042 [2024-07-24 19:01:35.982601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.042 [2024-07-24 19:01:35.982616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa5c0) on tqpair=0x826ec0 00:24:51.042 [2024-07-24 19:01:35.982632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.982646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.982659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa5c0, cid 5, qid 0 00:24:51.042 [2024-07-24 19:01:35.982827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.042 [2024-07-24 19:01:35.982836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
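
The GET FEATURES (arbitration, power management, temperature threshold, number of queues) and GET LOG PAGE commands in this stretch are ordinary admin commands: submit with a callback, then poll the admin queue until the completion arrives. A hedged sketch issuing one of each; the health log page read here is the data behind the "Health Information" section printed below:

#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool done; /* completion flag for this sketch */

static void admin_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    done = true;
}

/* Sketch: one GET FEATURES and one GET LOG PAGE, each polled to completion. */
static int query_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_health_information_page *health =
        spdk_zmalloc(sizeof(*health), 4096, NULL,
                     SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    done = false;
    if (health == NULL ||
        spdk_nvme_ctrlr_cmd_get_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD, 0, NULL, 0,
            admin_cb, NULL) != 0) {
        spdk_free(health);
        return -1;
    }
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

    done = false;
    if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
            SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
            health, sizeof(*health), 0, admin_cb, NULL) != 0) {
        spdk_free(health);
        return -1;
    }
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    spdk_free(health);
    return 0;
}
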
*DEBUG*: enter: pdu type =5 00:24:51.042 [2024-07-24 19:01:35.982840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa5c0) on tqpair=0x826ec0 00:24:51.042 [2024-07-24 19:01:35.982857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.982862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.982870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.982882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa5c0, cid 5, qid 0 00:24:51.042 [2024-07-24 19:01:35.983040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.042 [2024-07-24 19:01:35.983049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.042 [2024-07-24 19:01:35.983053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa5c0) on tqpair=0x826ec0 00:24:51.042 [2024-07-24 19:01:35.983075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.983089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.983098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.983111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.983120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.983135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.983148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x826ec0) 00:24:51.042 [2024-07-24 19:01:35.983162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.042 [2024-07-24 19:01:35.983177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa5c0, cid 5, qid 0 00:24:51.042 [2024-07-24 19:01:35.983183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa440, cid 4, qid 0 00:24:51.042 [2024-07-24 19:01:35.983189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa740, cid 6, qid 0 00:24:51.042 [2024-07-24 
19:01:35.983195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa8c0, cid 7, qid 0 00:24:51.042 [2024-07-24 19:01:35.983618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.042 [2024-07-24 19:01:35.983628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.042 [2024-07-24 19:01:35.983632] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983637] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=8192, cccid=5 00:24:51.042 [2024-07-24 19:01:35.983642] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa5c0) on tqpair(0x826ec0): expected_datao=0, payload_size=8192 00:24:51.042 [2024-07-24 19:01:35.983647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983667] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983672] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.042 [2024-07-24 19:01:35.983686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.042 [2024-07-24 19:01:35.983691] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983695] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=512, cccid=4 00:24:51.042 [2024-07-24 19:01:35.983701] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa440) on tqpair(0x826ec0): expected_datao=0, payload_size=512 00:24:51.042 [2024-07-24 19:01:35.983707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983715] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983720] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.042 [2024-07-24 19:01:35.983734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.042 [2024-07-24 19:01:35.983739] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983745] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=512, cccid=6 00:24:51.042 [2024-07-24 19:01:35.983752] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa740) on tqpair(0x826ec0): expected_datao=0, payload_size=512 00:24:51.042 [2024-07-24 19:01:35.983758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983766] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983770] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.042 [2024-07-24 19:01:35.983784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.042 [2024-07-24 19:01:35.983788] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983793] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x826ec0): datao=0, datal=4096, cccid=7 00:24:51.042 [2024-07-24 19:01:35.983798] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8aa8c0) on tqpair(0x826ec0): expected_datao=0, payload_size=4096 00:24:51.042 [2024-07-24 19:01:35.983807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983815] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.042 [2024-07-24 19:01:35.983819] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.043 [2024-07-24 19:01:35.983829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.043 [2024-07-24 19:01:35.983836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.043 [2024-07-24 19:01:35.983840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.043 [2024-07-24 19:01:35.983845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa5c0) on tqpair=0x826ec0 00:24:51.043 [2024-07-24 19:01:35.983859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.043 [2024-07-24 19:01:35.983867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.043 [2024-07-24 19:01:35.983871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.043 [2024-07-24 19:01:35.983875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa440) on tqpair=0x826ec0 00:24:51.043 [2024-07-24 19:01:35.983887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.043 [2024-07-24 19:01:35.983895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.043 [2024-07-24 19:01:35.983899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.043 [2024-07-24 19:01:35.983904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa740) on tqpair=0x826ec0 00:24:51.043 [2024-07-24 19:01:35.983912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.043 [2024-07-24 19:01:35.983920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.043 [2024-07-24 19:01:35.983924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.043 [2024-07-24 19:01:35.983928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa8c0) on tqpair=0x826ec0
00:24:51.043 =====================================================
00:24:51.043 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:51.043 =====================================================
00:24:51.043 Controller Capabilities/Features
00:24:51.043 ================================
00:24:51.043 Vendor ID: 8086
00:24:51.043 Subsystem Vendor ID: 8086
00:24:51.043 Serial Number: SPDK00000000000001
00:24:51.043 Model Number: SPDK bdev Controller
00:24:51.043 Firmware Version: 24.09
00:24:51.043 Recommended Arb Burst: 6
00:24:51.043 IEEE OUI Identifier: e4 d2 5c
00:24:51.043 Multi-path I/O
00:24:51.043 May have multiple subsystem ports: Yes
00:24:51.043 May have multiple controllers: Yes
00:24:51.043 Associated with SR-IOV VF: No
00:24:51.043 Max Data Transfer Size: 131072
00:24:51.043 Max Number of Namespaces: 32
00:24:51.043 Max Number of I/O Queues: 127
00:24:51.043 NVMe Specification Version (VS): 1.3
00:24:51.043 NVMe Specification Version (Identify): 1.3
00:24:51.043 Maximum Queue Entries: 128
00:24:51.043 Contiguous Queues Required: Yes
00:24:51.043 Arbitration Mechanisms Supported
00:24:51.043 Weighted Round Robin: Not Supported
00:24:51.043 Vendor Specific: Not Supported
00:24:51.043 Reset Timeout: 15000 ms
00:24:51.043 Doorbell Stride: 4 bytes
00:24:51.043 NVM Subsystem Reset: Not Supported
00:24:51.043 Command Sets Supported
00:24:51.043 NVM Command Set: Supported
00:24:51.043 Boot Partition: Not Supported
00:24:51.043 Memory Page Size Minimum: 4096 bytes
00:24:51.043 Memory Page Size Maximum: 4096 bytes
00:24:51.043 Persistent Memory Region: Not Supported
00:24:51.043 Optional Asynchronous Events Supported
00:24:51.043 Namespace Attribute Notices: Supported
00:24:51.043 Firmware Activation Notices: Not Supported
00:24:51.043 ANA Change Notices: Not Supported
00:24:51.043 PLE Aggregate Log Change Notices: Not Supported
00:24:51.043 LBA Status Info Alert Notices: Not Supported
00:24:51.043 EGE Aggregate Log Change Notices: Not Supported
00:24:51.043 Normal NVM Subsystem Shutdown event: Not Supported
00:24:51.043 Zone Descriptor Change Notices: Not Supported
00:24:51.043 Discovery Log Change Notices: Not Supported
00:24:51.043 Controller Attributes
00:24:51.043 128-bit Host Identifier: Supported
00:24:51.043 Non-Operational Permissive Mode: Not Supported
00:24:51.043 NVM Sets: Not Supported
00:24:51.043 Read Recovery Levels: Not Supported
00:24:51.043 Endurance Groups: Not Supported
00:24:51.043 Predictable Latency Mode: Not Supported
00:24:51.043 Traffic Based Keep ALive: Not Supported
00:24:51.043 Namespace Granularity: Not Supported
00:24:51.043 SQ Associations: Not Supported
00:24:51.043 UUID List: Not Supported
00:24:51.043 Multi-Domain Subsystem: Not Supported
00:24:51.043 Fixed Capacity Management: Not Supported
00:24:51.043 Variable Capacity Management: Not Supported
00:24:51.043 Delete Endurance Group: Not Supported
00:24:51.043 Delete NVM Set: Not Supported
00:24:51.043 Extended LBA Formats Supported: Not Supported
00:24:51.043 Flexible Data Placement Supported: Not Supported
00:24:51.043
00:24:51.043 Controller Memory Buffer Support
00:24:51.043 ================================
00:24:51.043 Supported: No
00:24:51.043
00:24:51.043 Persistent Memory Region Support
00:24:51.043 ================================
00:24:51.043 Supported: No
00:24:51.043
00:24:51.043 Admin Command Set Attributes
00:24:51.043 ============================
00:24:51.043 Security Send/Receive: Not Supported
00:24:51.043 Format NVM: Not Supported
00:24:51.043 Firmware Activate/Download: Not Supported
00:24:51.043 Namespace Management: Not Supported
00:24:51.043 Device Self-Test: Not Supported
00:24:51.043 Directives: Not Supported
00:24:51.043 NVMe-MI: Not Supported
00:24:51.043 Virtualization Management: Not Supported
00:24:51.043 Doorbell Buffer Config: Not Supported
00:24:51.043 Get LBA Status Capability: Not Supported
00:24:51.043 Command & Feature Lockdown Capability: Not Supported
00:24:51.043 Abort Command Limit: 4
00:24:51.043 Async Event Request Limit: 4
00:24:51.043 Number of Firmware Slots: N/A
00:24:51.043 Firmware Slot 1 Read-Only: N/A
00:24:51.043 Firmware Activation Without Reset: N/A
00:24:51.043 Multiple Update Detection Support: N/A
00:24:51.043 Firmware Update Granularity: No Information Provided
00:24:51.043 Per-Namespace SMART Log: No
00:24:51.043 Asymmetric Namespace Access Log Page: Not Supported
00:24:51.043 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:51.043 Command Effects Log Page: Supported
00:24:51.043 Get Log Page Extended Data: Supported
00:24:51.043 Telemetry Log Pages: Not Supported
00:24:51.043 Persistent Event Log Pages: Not Supported
00:24:51.043 Supported Log Pages Log Page: May Support
00:24:51.043 Commands Supported & Effects Log Page: Not Supported
00:24:51.043 Feature Identifiers & Effects Log Page: May Support
00:24:51.043 NVMe-MI Commands & Effects Log Page: May Support
00:24:51.043 Data Area 4 for Telemetry Log: Not Supported
00:24:51.043 Error Log Page Entries Supported: 128
00:24:51.043 Keep Alive: Supported
00:24:51.043 Keep Alive Granularity: 10000 ms
00:24:51.043
00:24:51.043 NVM Command Set Attributes
00:24:51.043 ==========================
00:24:51.043 Submission Queue Entry Size
00:24:51.043 Max: 64
00:24:51.043 Min: 64
00:24:51.043 Completion Queue Entry Size
00:24:51.043 Max: 16
00:24:51.043 Min: 16
00:24:51.043 Number of Namespaces: 32
00:24:51.043 Compare Command: Supported
00:24:51.043 Write Uncorrectable Command: Not Supported
00:24:51.043 Dataset Management Command: Supported
00:24:51.043 Write Zeroes Command: Supported
00:24:51.043 Set Features Save Field: Not Supported
00:24:51.043 Reservations: Supported
00:24:51.043 Timestamp: Not Supported
00:24:51.043 Copy: Supported
00:24:51.043 Volatile Write Cache: Present
00:24:51.043 Atomic Write Unit (Normal): 1
00:24:51.043 Atomic Write Unit (PFail): 1
00:24:51.043 Atomic Compare & Write Unit: 1
00:24:51.043 Fused Compare & Write: Supported
00:24:51.043 Scatter-Gather List
00:24:51.043 SGL Command Set: Supported
00:24:51.043 SGL Keyed: Supported
00:24:51.043 SGL Bit Bucket Descriptor: Not Supported
00:24:51.043 SGL Metadata Pointer: Not Supported
00:24:51.043 Oversized SGL: Not Supported
00:24:51.043 SGL Metadata Address: Not Supported
00:24:51.043 SGL Offset: Supported
00:24:51.043 Transport SGL Data Block: Not Supported
00:24:51.043 Replay Protected Memory Block: Not Supported
00:24:51.043
00:24:51.043 Firmware Slot Information
00:24:51.043 =========================
00:24:51.043 Active slot: 1
00:24:51.043 Slot 1 Firmware Revision: 24.09
00:24:51.043
00:24:51.043
00:24:51.043 Commands Supported and Effects
00:24:51.043 ==============================
00:24:51.043 Admin Commands
00:24:51.043 --------------
00:24:51.043 Get Log Page (02h): Supported
00:24:51.043 Identify (06h): Supported
00:24:51.043 Abort (08h): Supported
00:24:51.043 Set Features (09h): Supported
00:24:51.043 Get Features (0Ah): Supported
00:24:51.043 Asynchronous Event Request (0Ch): Supported
00:24:51.043 Keep Alive (18h): Supported
00:24:51.043 I/O Commands
00:24:51.043 ------------
00:24:51.043 Flush (00h): Supported LBA-Change
00:24:51.043 Write (01h): Supported LBA-Change
00:24:51.043 Read (02h): Supported
00:24:51.043 Compare (05h): Supported
00:24:51.043 Write Zeroes (08h): Supported LBA-Change
00:24:51.043 Dataset Management (09h): Supported LBA-Change
00:24:51.044 Copy (19h): Supported LBA-Change
00:24:51.044
00:24:51.044 Error Log
00:24:51.044 =========
00:24:51.044
00:24:51.044 Arbitration
00:24:51.044 ===========
00:24:51.044 Arbitration Burst: 1
00:24:51.044
00:24:51.044 Power Management
00:24:51.044 ================
00:24:51.044 Number of Power States: 1
00:24:51.044 Current Power State: Power State #0
00:24:51.044 Power State #0:
00:24:51.044 Max Power: 0.00 W
00:24:51.044 Non-Operational State: Operational
00:24:51.044 Entry Latency: Not Reported
00:24:51.044 Exit Latency: Not Reported
00:24:51.044 Relative Read Throughput: 0
00:24:51.044 Relative Read Latency: 0
00:24:51.044 Relative Write Throughput: 0
00:24:51.044 Relative Write Latency: 0
00:24:51.044 Idle Power: Not Reported
00:24:51.044 Active Power: Not Reported
00:24:51.044 Non-Operational Permissive Mode: Not Supported
00:24:51.044
00:24:51.044 Health Information
00:24:51.044 ==================
00:24:51.044 Critical Warnings:
00:24:51.044 Available Spare Space: OK
00:24:51.044 Temperature: OK
00:24:51.044 Device Reliability: OK
00:24:51.044 Read Only: No
00:24:51.044 Volatile Memory Backup: OK
00:24:51.044 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:51.044 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:51.044 Available Spare: 0%
00:24:51.044 Available Spare Threshold: 0%
00:24:51.044 Life Percentage Used:
[2024-07-24 19:01:35.984044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.984060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.984076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa8c0, cid 7, qid 0 00:24:51.044 [2024-07-24 19:01:35.984260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.984269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.984273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa8c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984314] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:51.044 [2024-07-24 19:01:35.984326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9e40) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.044 [2024-07-24 19:01:35.984340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8a9fc0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.044 [2024-07-24 19:01:35.984352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa140) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.044 [2024-07-24 19:01:35.984364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.044 [2024-07-24 19:01:35.984381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.984399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.984415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.984563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.984573]
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.984577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.984618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.984636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.984814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.984824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.984829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.984839] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:51.044 [2024-07-24 19:01:35.984844] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:51.044 [2024-07-24 19:01:35.984857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.984868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.984876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.984890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.985054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.985063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.985068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.985086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.985105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.985120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.985279] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.985290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.985297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.985315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.985335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.985349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.985491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.985500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.985506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.985522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.985540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.985553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.985724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.985734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.985738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.985755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.044 [2024-07-24 19:01:35.985773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.044 [2024-07-24 19:01:35.985787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.044 [2024-07-24 19:01:35.985934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.044 [2024-07-24 19:01:35.985944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.044 [2024-07-24 19:01:35.985948] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.044 [2024-07-24 19:01:35.985965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.044 [2024-07-24 19:01:35.985976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.045 [2024-07-24 19:01:35.985985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.045 [2024-07-24 19:01:35.985999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.045 [2024-07-24 19:01:35.986163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.045 [2024-07-24 19:01:35.986172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.045 [2024-07-24 19:01:35.986176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.986184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.045 [2024-07-24 19:01:35.986197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.986202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.986206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.045 [2024-07-24 19:01:35.986214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.045 [2024-07-24 19:01:35.986228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.045 [2024-07-24 19:01:35.986378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.045 [2024-07-24 19:01:35.986387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.045 [2024-07-24 19:01:35.986391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.986396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.045 [2024-07-24 19:01:35.986407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.986413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.986417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.045 [2024-07-24 19:01:35.986425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.045 [2024-07-24 19:01:35.986438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.045 [2024-07-24 19:01:35.986597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.045 [2024-07-24 19:01:35.990617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.045 [2024-07-24 19:01:35.990624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.990629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.045 
[2024-07-24 19:01:35.990643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.990649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.990653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x826ec0) 00:24:51.045 [2024-07-24 19:01:35.990662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.045 [2024-07-24 19:01:35.990677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8aa2c0, cid 3, qid 0 00:24:51.045 [2024-07-24 19:01:35.990834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.045 [2024-07-24 19:01:35.990842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.045 [2024-07-24 19:01:35.990847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.045 [2024-07-24 19:01:35.990852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8aa2c0) on tqpair=0x826ec0 00:24:51.045 [2024-07-24 19:01:35.990860] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:51.045 0% 00:24:51.045 Data Units Read: 0 00:24:51.045 Data Units Written: 0 00:24:51.045 Host Read Commands: 0 00:24:51.045 Host Write Commands: 0 00:24:51.045 Controller Busy Time: 0 minutes 00:24:51.045 Power Cycles: 0 00:24:51.045 Power On Hours: 0 hours 00:24:51.045 Unsafe Shutdowns: 0 00:24:51.045 Unrecoverable Media Errors: 0 00:24:51.045 Lifetime Error Log Entries: 0 00:24:51.045 Warning Temperature Time: 0 minutes 00:24:51.045 Critical Temperature Time: 0 minutes 00:24:51.045 00:24:51.045 Number of Queues 00:24:51.045 ================ 00:24:51.045 Number of I/O Submission Queues: 127 00:24:51.045 Number of I/O Completion Queues: 127 00:24:51.045 00:24:51.045 Active Namespaces 00:24:51.045 ================= 00:24:51.045 Namespace ID:1 00:24:51.045 Error Recovery Timeout: Unlimited 00:24:51.045 Command Set Identifier: NVM (00h) 00:24:51.045 Deallocate: Supported 00:24:51.045 Deallocated/Unwritten Error: Not Supported 00:24:51.045 Deallocated Read Value: Unknown 00:24:51.045 Deallocate in Write Zeroes: Not Supported 00:24:51.045 Deallocated Guard Field: 0xFFFF 00:24:51.045 Flush: Supported 00:24:51.045 Reservation: Supported 00:24:51.045 Namespace Sharing Capabilities: Multiple Controllers 00:24:51.045 Size (in LBAs): 131072 (0GiB) 00:24:51.045 Capacity (in LBAs): 131072 (0GiB) 00:24:51.045 Utilization (in LBAs): 131072 (0GiB) 00:24:51.045 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:51.045 EUI64: ABCDEF0123456789 00:24:51.045 UUID: e3878ac2-7bdc-41cb-9a96-1bb72b6c1661 00:24:51.045 Thin Provisioning: Not Supported 00:24:51.045 Per-NS Atomic Units: Yes 00:24:51.045 Atomic Boundary Size (Normal): 0 00:24:51.045 Atomic Boundary Size (PFail): 0 00:24:51.045 Atomic Boundary Offset: 0 00:24:51.045 Maximum Single Source Range Length: 65535 00:24:51.045 Maximum Copy Length: 65535 00:24:51.045 Maximum Source Range Count: 1 00:24:51.045 NGUID/EUI64 Never Reused: No 00:24:51.045 Namespace Write Protected: No 00:24:51.045 Number of LBA Formats: 1 00:24:51.045 Current LBA Format: LBA Format #00 00:24:51.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:51.045 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.045 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.045 rmmod nvme_tcp 00:24:51.304 rmmod nvme_fabrics 00:24:51.304 rmmod nvme_keyring 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2605026 ']' 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2605026 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2605026 ']' 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2605026 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2605026 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2605026' 00:24:51.304 killing process with pid 2605026 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2605026 00:24:51.304 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2605026 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.562 19:01:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.465 19:01:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.465 00:24:53.465 real 0m9.240s 00:24:53.465 user 0m5.259s 00:24:53.465 sys 0m4.878s 00:24:53.465 19:01:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.465 19:01:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:53.465 ************************************ 00:24:53.465 END TEST nvmf_identify 00:24:53.465 ************************************ 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.723 ************************************ 00:24:53.723 START TEST nvmf_perf 00:24:53.723 ************************************ 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:53.723 * Looking for test storage... 00:24:53.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.723 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.724 19:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 
00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:00.291 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:00.291 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.291 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:00.292 Found net devices under 0000:af:00.0: cvl_0_0 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:00.292 Found net devices under 0000:af:00.1: cvl_0_1 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:25:00.292 00:25:00.292 --- 10.0.0.2 ping statistics --- 00:25:00.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.292 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:25:00.292 00:25:00.292 --- 10.0.0.1 ping statistics --- 00:25:00.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.292 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2608764 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2608764 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2608764 ']' 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.292 19:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:00.292 [2024-07-24 19:01:44.543825] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:25:00.292 [2024-07-24 19:01:44.543881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.292 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.292 [2024-07-24 19:01:44.629954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.292 [2024-07-24 19:01:44.723331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:00.292 [2024-07-24 19:01:44.723375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.292 [2024-07-24 19:01:44.723386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.292 [2024-07-24 19:01:44.723395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.292 [2024-07-24 19:01:44.723402] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.292 [2024-07-24 19:01:44.723454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.292 [2024-07-24 19:01:44.723589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.292 [2024-07-24 19:01:44.723700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.292 [2024-07-24 19:01:44.723700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:00.551 19:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:03.868 19:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:03.868 19:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:04.127 19:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:25:04.127 19:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:04.385 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:04.385 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:25:04.385 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:04.385 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:04.385 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:04.644 [2024-07-24 19:01:49.422859] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.644 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:04.902 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:04.902 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:05.160 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:05.160 19:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:05.419 19:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.677 [2024-07-24 19:01:50.464798] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.677 19:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:05.936 19:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:25:05.936 19:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:05.936 19:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:05.936 19:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:07.310 Initializing NVMe Controllers 00:25:07.310 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:25:07.310 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:25:07.310 Initialization complete. Launching workers. 00:25:07.310 ======================================================== 00:25:07.310 Latency(us) 00:25:07.310 Device Information : IOPS MiB/s Average min max 00:25:07.310 PCIE (0000:86:00.0) NSID 1 from core 0: 69239.59 270.47 461.41 54.22 4528.98 00:25:07.310 ======================================================== 00:25:07.310 Total : 69239.59 270.47 461.41 54.22 4528.98 00:25:07.310 00:25:07.310 19:01:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.310 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.687 Initializing NVMe Controllers 00:25:08.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:08.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:08.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:08.687 Initialization complete. Launching workers. 
00:25:08.687 ======================================================== 00:25:08.687 Latency(us) 00:25:08.687 Device Information : IOPS MiB/s Average min max 00:25:08.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.00 0.27 15068.47 268.48 45063.14 00:25:08.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21836.93 7947.23 47904.61 00:25:08.687 ======================================================== 00:25:08.687 Total : 115.00 0.45 17775.86 268.48 47904.61 00:25:08.687 00:25:08.687 19:01:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.687 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.064 Initializing NVMe Controllers 00:25:10.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:10.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:10.064 Initialization complete. Launching workers. 00:25:10.064 ======================================================== 00:25:10.064 Latency(us) 00:25:10.064 Device Information : IOPS MiB/s Average min max 00:25:10.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4369.99 17.07 7360.41 1117.24 12818.23 00:25:10.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3921.99 15.32 8205.57 6685.92 15848.96 00:25:10.064 ======================================================== 00:25:10.064 Total : 8291.98 32.39 7760.16 1117.24 15848.96 00:25:10.064 00:25:10.064 19:01:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:10.064 19:01:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:10.064 19:01:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.064 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.598 Initializing NVMe Controllers 00:25:12.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.598 Controller IO queue size 128, less than required. 00:25:12.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:12.598 Controller IO queue size 128, less than required. 00:25:12.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:12.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:12.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:12.598 Initialization complete. Launching workers. 
00:25:12.598 ======================================================== 00:25:12.598 Latency(us) 00:25:12.598 Device Information : IOPS MiB/s Average min max 00:25:12.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1116.05 279.01 117411.84 71160.84 195295.96 00:25:12.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 541.06 135.26 245959.43 65079.13 378009.22 00:25:12.598 ======================================================== 00:25:12.598 Total : 1657.11 414.28 159383.41 65079.13 378009.22 00:25:12.598 00:25:12.598 19:01:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:12.598 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.857 No valid NVMe controllers or AIO or URING devices found 00:25:12.857 Initializing NVMe Controllers 00:25:12.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.857 Controller IO queue size 128, less than required. 00:25:12.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:12.857 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:12.857 Controller IO queue size 128, less than required. 00:25:12.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:12.857 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:12.857 WARNING: Some requested NVMe devices were skipped 00:25:12.858 19:01:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:12.858 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.394 Initializing NVMe Controllers 00:25:15.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.394 Controller IO queue size 128, less than required. 00:25:15.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.394 Controller IO queue size 128, less than required. 00:25:15.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:15.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:15.394 Initialization complete. Launching workers. 
00:25:15.394 00:25:15.394 ==================== 00:25:15.394 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:15.394 TCP transport: 00:25:15.394 polls: 14172 00:25:15.394 idle_polls: 5519 00:25:15.394 sock_completions: 8653 00:25:15.394 nvme_completions: 4513 00:25:15.394 submitted_requests: 6694 00:25:15.394 queued_requests: 1 00:25:15.394 00:25:15.394 ==================== 00:25:15.394 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:15.394 TCP transport: 00:25:15.394 polls: 15356 00:25:15.394 idle_polls: 7205 00:25:15.394 sock_completions: 8151 00:25:15.394 nvme_completions: 4749 00:25:15.394 submitted_requests: 7112 00:25:15.394 queued_requests: 1 00:25:15.394 ======================================================== 00:25:15.394 Latency(us) 00:25:15.394 Device Information : IOPS MiB/s Average min max 00:25:15.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1127.93 281.98 116100.32 56089.13 176507.99 00:25:15.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1186.93 296.73 110185.31 41292.84 162683.00 00:25:15.394 ======================================================== 00:25:15.394 Total : 2314.86 578.71 113067.44 41292.84 176507.99 00:25:15.394 00:25:15.394 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:15.394 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.653 rmmod nvme_tcp 00:25:15.653 rmmod nvme_fabrics 00:25:15.653 rmmod nvme_keyring 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2608764 ']' 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2608764 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2608764 ']' 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2608764 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2608764 00:25:15.653 19:02:00 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2608764' 00:25:15.653 killing process with pid 2608764 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2608764 00:25:15.653 19:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2608764 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.558 19:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.496 19:02:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:19.496 00:25:19.496 real 0m25.784s 00:25:19.496 user 1m10.839s 00:25:19.496 sys 0m7.464s 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:19.497 ************************************ 00:25:19.497 END TEST nvmf_perf 00:25:19.497 ************************************ 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.497 ************************************ 00:25:19.497 START TEST nvmf_fio_host 00:25:19.497 ************************************ 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:19.497 * Looking for test storage... 
00:25:19.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.497 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.757 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:19.757 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:19.757 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:19.757 19:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:25.035 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:25.035 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.035 
19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:25.035 Found net devices under 0000:af:00.0: cvl_0_0 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:25.035 Found net devices under 0000:af:00.1: cvl_0_1 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
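The initiator/target pair used from here on is carved out of one two-port NIC with a network namespace: port cvl_0_0 has just been moved into cvl_0_0_ns_spdk to act as the target side, and the lines below assign addresses and open the NVMe/TCP port. Condensed from the nvmf_tcp_init steps around this point:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # NVMe/TCP port

The two pings that follow confirm reachability in both directions before anything NVMe-related starts.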
00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:25.035 19:02:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.035 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.035 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:25.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:25:25.295 00:25:25.295 --- 10.0.0.2 ping statistics --- 00:25:25.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.295 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:25:25.295 00:25:25.295 --- 10.0.0.1 ping statistics --- 00:25:25.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.295 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2615433 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 2615433 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2615433 ']' 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.295 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 [2024-07-24 19:02:10.157893] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:25:25.295 [2024-07-24 19:02:10.157948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.295 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.295 [2024-07-24 19:02:10.246669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:25.554 [2024-07-24 19:02:10.341843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.554 [2024-07-24 19:02:10.341879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.554 [2024-07-24 19:02:10.341889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.554 [2024-07-24 19:02:10.341898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.554 [2024-07-24 19:02:10.341906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
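With the namespace plumbed, fio.sh starts nvmf_tgt inside it and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers; only then does the provisioning shown below run. Condensed, with paths shortened to the repo root (every call appears in the surrounding lines):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # once /var/tmp/spdk.sock is up:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MB, 512 B blocks
                                                          # (hedged reading of the
                                                          #  two positional args)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The fio jobs then reach the subsystem purely over TCP via the spdk_nvme fio plugin, with the target encoded in the filename string 'trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'.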
00:25:25.554 [2024-07-24 19:02:10.341955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.554 [2024-07-24 19:02:10.341998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.554 [2024-07-24 19:02:10.342033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.554 [2024-07-24 19:02:10.342033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:25.554 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.554 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:25.554 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:25.812 [2024-07-24 19:02:10.682373] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.812 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:25.812 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.812 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.812 19:02:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:26.071 Malloc1 00:25:26.071 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.329 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:26.588 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.847 [2024-07-24 19:02:11.698074] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.847 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:25:27.105 19:02:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:25:27.105 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:27.106 19:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:27.364 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:27.364 fio-3.35 00:25:27.364 Starting 1 thread 00:25:27.364 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.897 00:25:29.897 test: (groupid=0, jobs=1): err= 0: pid=2616025: Wed Jul 24 19:02:14 2024 00:25:29.897 read: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(29.1MiB/2013msec) 00:25:29.897 slat (usec): min=2, max=819, avg= 3.01, stdev=10.73 00:25:29.897 clat (usec): min=5304, max=32951, avg=18739.13, stdev=1922.40 00:25:29.897 lat (usec): min=5332, max=32954, avg=18742.14, stdev=1921.42 00:25:29.897 clat percentiles (usec): 00:25:29.897 | 1.00th=[14746], 5.00th=[16188], 10.00th=[16712], 20.00th=[17171], 00:25:29.897 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 00:25:29.897 | 70.00th=[19530], 80.00th=[20317], 90.00th=[20841], 95.00th=[21627], 00:25:29.897 | 99.00th=[22676], 99.50th=[24773], 99.90th=[32637], 99.95th=[32900], 00:25:29.897 | 99.99th=[32900] 00:25:29.897 bw ( KiB/s): min=14192, max=15216, per=99.64%, avg=14730.00, stdev=421.27, samples=4 00:25:29.897 iops : min= 3548, max= 3804, avg=3682.50, stdev=105.32, samples=4 00:25:29.898 write: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(29.2MiB/2013msec); 0 zone 
resets 00:25:29.898 slat (usec): min=2, max=232, avg= 3.11, stdev= 4.72 00:25:29.898 clat (usec): min=3463, max=28753, avg=15586.68, stdev=1530.69 00:25:29.898 lat (usec): min=3478, max=28756, avg=15589.79, stdev=1530.11 00:25:29.898 clat percentiles (usec): 00:25:29.898 | 1.00th=[12125], 5.00th=[13566], 10.00th=[13960], 20.00th=[14615], 00:25:29.898 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15664], 60.00th=[15926], 00:25:29.898 | 70.00th=[16319], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:25:29.898 | 99.00th=[18482], 99.50th=[19268], 99.90th=[28181], 99.95th=[28443], 00:25:29.898 | 99.99th=[28705] 00:25:29.898 bw ( KiB/s): min=14592, max=15040, per=99.95%, avg=14854.00, stdev=188.79, samples=4 00:25:29.898 iops : min= 3648, max= 3760, avg=3713.50, stdev=47.20, samples=4 00:25:29.898 lat (msec) : 4=0.05%, 10=0.42%, 20=87.24%, 50=12.29% 00:25:29.898 cpu : usr=69.28%, sys=26.49%, ctx=322, majf=0, minf=5 00:25:29.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:29.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:29.898 issued rwts: total=7440,7479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:29.898 00:25:29.898 Run status group 0 (all jobs): 00:25:29.898 READ: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=29.1MiB (30.5MB), run=2013-2013msec 00:25:29.898 WRITE: bw=14.5MiB/s (15.2MB/s), 14.5MiB/s-14.5MiB/s (15.2MB/s-15.2MB/s), io=29.2MiB (30.6MB), run=2013-2013msec 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # asan_lib= 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:29.898 19:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:30.157 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:30.157 fio-3.35 00:25:30.157 Starting 1 thread 00:25:30.415 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.961 00:25:32.961 test: (groupid=0, jobs=1): err= 0: pid=2616660: Wed Jul 24 19:02:17 2024 00:25:32.961 read: IOPS=4482, BW=70.0MiB/s (73.4MB/s)(140MiB/2006msec) 00:25:32.961 slat (usec): min=3, max=125, avg= 4.26, stdev= 1.55 00:25:32.961 clat (usec): min=4423, max=36576, avg=16125.17, stdev=5069.18 00:25:32.961 lat (usec): min=4427, max=36580, avg=16129.43, stdev=5069.17 00:25:32.961 clat percentiles (usec): 00:25:32.961 | 1.00th=[ 5866], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[10421], 00:25:32.961 | 30.00th=[13960], 40.00th=[15926], 50.00th=[17171], 60.00th=[18220], 00:25:32.961 | 70.00th=[19006], 80.00th=[19792], 90.00th=[21627], 95.00th=[23725], 00:25:32.961 | 99.00th=[27919], 99.50th=[29492], 99.90th=[33162], 99.95th=[34341], 00:25:32.961 | 99.99th=[36439] 00:25:32.961 bw ( KiB/s): min=30816, max=46208, per=50.91%, avg=36512.00, stdev=6714.05, samples=4 00:25:32.961 iops : min= 1926, max= 2888, avg=2282.00, stdev=419.63, samples=4 00:25:32.961 write: IOPS=2718, BW=42.5MiB/s (44.5MB/s)(75.7MiB/1782msec); 0 zone resets 00:25:32.961 slat (usec): min=45, max=317, avg=47.29, stdev= 7.13 00:25:32.961 clat (usec): min=5430, max=47898, avg=22150.86, stdev=7584.88 00:25:32.961 lat (usec): min=5476, max=47944, avg=22198.14, stdev=7584.57 00:25:32.961 clat percentiles (usec): 00:25:32.961 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[11076], 20.00th=[12780], 00:25:32.961 | 30.00th=[15533], 40.00th=[22414], 50.00th=[24773], 60.00th=[26346], 00:25:32.961 | 70.00th=[27395], 80.00th=[28705], 90.00th=[30016], 95.00th=[31327], 00:25:32.961 | 99.00th=[37487], 99.50th=[40633], 99.90th=[47449], 99.95th=[47449], 00:25:32.961 | 99.99th=[47973] 00:25:32.961 bw ( KiB/s): min=33344, max=48640, per=88.00%, avg=38272.00, stdev=6991.69, samples=4 00:25:32.961 iops : min= 2084, max= 3040, avg=2392.00, stdev=436.98, samples=4 00:25:32.961 lat (msec) : 10=12.46%, 20=53.63%, 50=33.91% 00:25:32.961 cpu : usr=81.65%, sys=16.71%, ctx=67, majf=0, minf=2 00:25:32.961 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:32.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:32.961 issued rwts: total=8991,4844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:32.961 00:25:32.961 Run status group 0 (all jobs): 00:25:32.961 READ: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2006-2006msec 00:25:32.961 WRITE: bw=42.5MiB/s (44.5MB/s), 42.5MiB/s-42.5MiB/s (44.5MB/s-44.5MB/s), io=75.7MiB (79.4MB), run=1782-1782msec 00:25:32.961 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.961 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:32.961 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:32.961 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:33.219 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:33.220 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:33.220 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:33.220 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:33.220 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:33.220 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:33.220 19:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:33.220 rmmod nvme_tcp 00:25:33.220 rmmod nvme_fabrics 00:25:33.220 rmmod nvme_keyring 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2615433 ']' 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2615433 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2615433 ']' 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2615433 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2615433 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2615433' 00:25:33.220 killing process with pid 2615433 00:25:33.220 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2615433 00:25:33.220 19:02:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2615433 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.478 19:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.381 19:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:35.381 00:25:35.381 real 0m16.028s 00:25:35.381 user 1m1.461s 00:25:35.381 sys 0m6.429s 00:25:35.381 19:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:35.381 19:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.381 ************************************ 00:25:35.381 END TEST nvmf_fio_host 00:25:35.381 ************************************ 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.673 ************************************ 00:25:35.673 START TEST nvmf_failover 00:25:35.673 ************************************ 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:35.673 * Looking for test storage... 
00:25:35.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
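Unlike fio_host, the failover test starting here appears to drive its IO through bdevperf, which would explain the second RPC socket, /var/tmp/bdevperf.sock, declared above alongside the usual /var/tmp/spdk.sock (an inference from the variables; the bdevperf invocation itself comes later in the log). The malloc backing device keeps the same geometry set just above:

    # MALLOC_BDEV_SIZE=64 (MB) at MALLOC_BLOCK_SIZE=512 (bytes):
    $ echo $(( 64 * 1024 * 1024 / 512 ))   # 131072 blocks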
00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.673 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:35.674 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:35.674 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:35.674 19:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.243 19:02:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:42.243 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.243 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:42.243 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:42.244 Found net devices under 0000:af:00.0: cvl_0_0 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:42.244 Found net devices under 0000:af:00.1: cvl_0_1 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:42.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:25:42.244 00:25:42.244 --- 10.0.0.2 ping statistics --- 00:25:42.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.244 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:25:42.244 00:25:42.244 --- 10.0.0.1 ping statistics --- 00:25:42.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.244 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2620734 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2620734 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2620734 ']' 00:25:42.244 19:02:26 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.244 19:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:42.244 [2024-07-24 19:02:26.457569] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:25:42.244 [2024-07-24 19:02:26.457631] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.244 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.244 [2024-07-24 19:02:26.544379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:42.244 [2024-07-24 19:02:26.648679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.244 [2024-07-24 19:02:26.648724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.244 [2024-07-24 19:02:26.648737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.244 [2024-07-24 19:02:26.648748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.244 [2024-07-24 19:02:26.648758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
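The namespace plumbing traced in nvmf/common.sh above is easier to follow gathered into one block; this recap uses exactly the interface names, addresses, and commands from this run:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

With that in place, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, as logged just above), so the initiator-side tools must reach it over the real link rather than loopback.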
00:25:42.244 [2024-07-24 19:02:26.648821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.244 [2024-07-24 19:02:26.649066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.244 [2024-07-24 19:02:26.649069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.505 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.505 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:42.505 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.506 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.506 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:42.506 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.506 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:42.764 [2024-07-24 19:02:27.683699] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.764 19:02:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:43.023 Malloc0 00:25:43.023 19:02:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.282 19:02:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:43.540 19:02:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.799 [2024-07-24 19:02:28.760464] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.799 19:02:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:44.057 [2024-07-24 19:02:29.017432] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.057 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:44.316 [2024-07-24 19:02:29.278523] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2621230 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2621230 /var/tmp/bdevperf.sock 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2621230 ']' 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:44.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.316 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:44.884 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.884 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:44.884 19:02:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.142 NVMe0n1 00:25:45.142 19:02:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.401 00:25:45.401 19:02:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:45.401 19:02:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2621319 00:25:45.401 19:02:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:46.777 19:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:46.777 [2024-07-24 19:02:31.611749] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99460 is same with the state(6) to be set 00:25:46.779 [2024-07-24 19:02:31.613732] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99460 is same with the state(6) to be set
00:25:46.779 19:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:50.065 19:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:50.065 00:25:50.065 19:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:50.324 19:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:53.617 19:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.617 [2024-07-24 19:02:38.502638] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.617 19:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:54.553 19:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:54.814 [2024-07-24 19:02:39.776270] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9afd0 is same with the state(6) to be set 00:25:54.814 [2024-07-24 19:02:39.776370] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9afd0 is same with the state(6) to be set
00:25:54.814 19:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2621319 00:26:01.385 0 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2621230 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2621230 ']' 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2621230 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2621230 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2621230' 00:26:01.385 killing process with pid 2621230 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2621230 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2621230 00:26:01.385 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:01.385 [2024-07-24 19:02:29.359647] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:26:01.385 [2024-07-24 19:02:29.359714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621230 ] 00:26:01.385 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.385 [2024-07-24 19:02:29.440521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.385 [2024-07-24 19:02:29.529137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.385 Running I/O for 15 seconds...
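Before the per-command abort dump that follows, the failover choreography replayed from try.txt is easier to see as one block. Every RPC below appears verbatim in the trace; only the $rpc and $nqn shorthand is added here:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # bdevperf registers the same subsystem through two portals (active + standby)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    # while verify I/O runs, the target drops listeners one at a time, forcing
    # the initiator to fail over: 4420 -> 4421 -> 4422 -> back to 4420
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420; sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421; sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420; sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
    wait $run_test_pid   # bdevperf's 15 s verify run drains and reports

The tcp.c:1706 ERROR lines above are apparently logged while the target tears down qpairs for each removed listener; the ABORTED - SQ DELETION completions below show the in-flight commands on the dying queue being failed back to bdevperf.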
00:26:01.385 [2024-07-24 19:02:31.614553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.385 [2024-07-24 19:02:31.614597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.386 [2024-07-24 19:02:31.615448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.386 [2024-07-24 19:02:31.615458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.386 [2024-07-24 19:02:31.615693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.387 [2024-07-24 19:02:31.615702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615914] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.387 [2024-07-24 19:02:31.615979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.615992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616128] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.387 [2024-07-24 19:02:31.616441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.387 [2024-07-24 19:02:31.616450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 
[2024-07-24 19:02:31.616571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.616981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.616991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.388 [2024-07-24 19:02:31.617014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34232 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.388 [2024-07-24 19:02:31.617079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.388 [2024-07-24 19:02:31.617113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34248 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.388 [2024-07-24 19:02:31.617147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.388 [2024-07-24 19:02:31.617181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34264 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.388 [2024-07-24 19:02:31.617214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34272 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.388 [2024-07-24 19:02:31.617249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.388 [2024-07-24 19:02:31.617257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34280 len:8 PRP1 0x0 PRP2 0x0 00:26:01.388 [2024-07-24 19:02:31.617266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.388 [2024-07-24 19:02:31.617275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34288 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.617311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34296 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.617346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34304 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.617380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.617414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34320 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.617447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34328 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.617480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.617488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.617496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34336 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.617505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.628171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.628188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.628200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.628213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.628226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.389 [2024-07-24 19:02:31.628236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.389 [2024-07-24 19:02:31.628250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0 00:26:01.389 [2024-07-24 19:02:31.628263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:31.628318] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c1f160 was disconnected and freed. reset controller. 
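For readers decoding these notices: the "(00/08)" pair is the NVMe status code type and status code from the completion. SCT 0x0 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion", which is the expected status while the initiator tears down qid:1 during a path switch. A minimal, hypothetical shell helper (not part of the SPDK tree or this test) that maps the pair back to the text spdk_nvme_print_completion prints:

#!/usr/bin/env bash
# decode_status SCT SC -- illustrative helper, assumed names, not from SPDK.
# NVMe completion status field layout: DNR[15] M[14] CRD[13:12] SCT[11:9] SC[8:1] P[0];
# the log above renders SCT/SC as the "(xx/yy)" hex pair and p/m/dnr separately.
decode_status() {
    local sct=$((16#$1)) sc=$((16#$2))
    case "$sct:$sc" in
        0:0) echo "SUCCESS" ;;
        0:8) echo "ABORTED - SQ DELETION" ;;   # generic status, SC 0x08
        *)   echo "sct=$sct sc=$sc (see the NVMe base spec status code tables)" ;;
    esac
}
decode_status 00 08    # prints: ABORTED - SQ DELETION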
00:26:01.389 [2024-07-24 19:02:31.628335] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:01.389 [2024-07-24 19:02:31.628369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.389 [2024-07-24 19:02:31.628384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.389 [2024-07-24 19:02:31.628399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.389 [2024-07-24 19:02:31.628412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.389 [2024-07-24 19:02:31.628426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.389 [2024-07-24 19:02:31.628439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.389 [2024-07-24 19:02:31.628454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.389 [2024-07-24 19:02:31.628466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.389 [2024-07-24 19:02:31.628480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:01.389 [2024-07-24 19:02:31.628538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffa30 (9): Bad file descriptor
00:26:01.389 [2024-07-24 19:02:31.634392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:01.389 [2024-07-24 19:02:31.754906] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
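The failover above (10.0.0.2:4420 -> 10.0.0.2:4421 on nqn.2016-06.io.spdk:cnode1) is bdev_nvme switching to an alternate transport ID after the primary path's qpairs are deleted. A hedged sketch of how such a two-path setup is typically wired with SPDK's rpc.py -- the bdev name, rpc.py path, and the availability of the -x multipath-mode option in this SPDK revision are assumptions, not taken from this run:

rpc_py=./spdk/scripts/rpc.py    # assumed location of the RPC client
# Primary path to the subsystem.
$rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Alternate path registered as a failover target: when qid:1 on 4420 is deleted
# (the SQ-deletion aborts above), bdev_nvme resets the controller onto 4421.
$rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover

With that in place, in-flight I/O on the failed path completes as ABORTED - SQ DELETION, is requeued by bdev_nvme, and resumes once "Resetting controller successful." is logged.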
00:26:01.389 [2024-07-24 19:02:35.240415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 
19:02:35.240697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.389 [2024-07-24 19:02:35.240847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.389 [2024-07-24 19:02:35.240857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.240880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.240902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.240923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.240944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.240965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.240987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.240998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241125] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.390 [2024-07-24 19:02:35.241136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.390 [2024-07-24 19:02:35.241326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.390 [2024-07-24 19:02:35.241338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
00:26:01.390 - 00:26:01.391 [2024-07-24 19:02:35.241348 - 19:02:35.242161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:101032 through lba:101336 (len:8 per command, SGL DATA BLOCK OFFSET 0x0 len:0x1000), one entry per outstanding command with varying cid; each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
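Note: the "(00/08)" printed above is status code type 0x0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion"; the driver completes every outstanding I/O with it when a submission queue is torn down. A minimal C sketch, using the public SPDK NVMe API, of how an application could recognize this status in its completion callback and retry once the controller reconnects (the callback and context names are illustrative, not from this test):

#include "spdk/nvme.h"

/* Hypothetical per-I/O context; not part of this test. */
struct my_io {
	bool retry_after_reset;
};

/* I/O completion callback. "(00/08)" in the log corresponds to
 * SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION. */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct my_io *io = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue was deleted (reset/failover in
		 * progress); the command never executed, so it can be
		 * resubmitted safely on a fresh qpair. */
		io->retry_after_reset = true;
		return;
	}
	/* ...handle success or other error statuses... */
}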
00:26:01.391 - 00:26:01.393 [2024-07-24 19:02:35.242195 - 19:02:35.254463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, then nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: for each queued request: WRITE sqid:1 cid:0 nsid:1 lba:101344 through lba:101712 (len:8, PRP1 0x0 PRP2 0x0) and READ sqid:1 cid:0 nsid:1 lba:100944 and lba:100952 (len:8, PRP1 0x0 PRP2 0x0); each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.393 [2024-07-24 19:02:35.254520] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c21110 was disconnected and freed. reset controller.
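Note: bdev_nvme_disconnected_qpair_cb above is the disconnect hook that bdev_nvme registers with its poll group; the driver aborts the qpair's queued requests (the SQ DELETION completions summarized above) before the callback runs. A rough sketch of the same mechanism with the public poll-group API (all names other than the SPDK functions are illustrative):

#include "spdk/nvme.h"

/* Called by the poll group for each disconnected qpair. By this point the
 * driver has already failed the qpair's outstanding and queued requests
 * with ABORTED - SQ DELETION, as in the log. */
static void
on_disconnected_qpair(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
{
	/* Release the qpair; a controller reset/failover would be
	 * scheduled from here (elided in this sketch). */
	spdk_nvme_ctrlr_free_io_qpair(qpair);
}

/* Completion poller: processing completions is also what detects the
 * disconnect and invokes the callback above. */
static int
poll_group_poll(void *arg)
{
	struct spdk_nvme_poll_group *group = arg;

	return (int)spdk_nvme_poll_group_process_completions(group, 0,
							     on_disconnected_qpair);
}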
00:26:01.393 [2024-07-24 19:02:35.254535] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:01.393 [2024-07-24 19:02:35.254568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.393 [2024-07-24 19:02:35.254583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.393 [2024-07-24 19:02:35.254597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.393 [2024-07-24 19:02:35.254617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.393 [2024-07-24 19:02:35.254631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.393 [2024-07-24 19:02:35.254644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.393 [2024-07-24 19:02:35.254657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:01.393 [2024-07-24 19:02:35.254670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.393 [2024-07-24 19:02:35.254683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:01.393 [2024-07-24 19:02:35.254734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffa30 (9): Bad file descriptor
00:26:01.393 [2024-07-24 19:02:35.260557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:01.393 [2024-07-24 19:02:35.304750] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
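Note: the failover sequence above (controller enters the failed state, the transport ID switches from 10.0.0.2:4421 to 10.0.0.2:4422, the controller resets and reconnects) can also be driven through the public driver API. A sketch under the assumption that the application holds the ctrlr handle and has an alternate trid prepared (the function and variable names here are illustrative, not the bdev_nvme implementation):

#include "spdk/nvme.h"

/* Swap a failed controller over to an alternate path and reconnect,
 * mirroring the log sequence: failed state -> failover trid ->
 * resetting controller -> "Resetting controller successful." */
static int
failover_and_reset(struct spdk_nvme_ctrlr *ctrlr,
		   struct spdk_nvme_transport_id *alternate)
{
	int rc;

	if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
		/* spdk_nvme_ctrlr_set_trid requires a failed controller. */
		spdk_nvme_ctrlr_fail(ctrlr);
	}

	rc = spdk_nvme_ctrlr_set_trid(ctrlr, alternate);
	if (rc != 0) {
		return rc;	/* e.g. subsystem NQN mismatch */
	}

	return spdk_nvme_ctrlr_reset(ctrlr);
}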
00:26:01.393 - 00:26:01.395 [2024-07-24 19:02:39.776661 - 19:02:39.777534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:65656 through lba:65960 (len:8 per command, SGL DATA BLOCK OFFSET 0x0 len:0x1000), one entry per outstanding command with varying cid; each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.395 [2024-07-24 19:02:39.777546 - 19:02:39.778227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:65016 through lba:65264 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.395 [2024-07-24 19:02:39.778239 - 19:02:39.778270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:65968 and lba:65976 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000); each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.396 [2024-07-24 19:02:39.778282 - 19:02:39.778441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:65272 through lba:65328 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396
[2024-07-24 19:02:39.778452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778671] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.778981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.778993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.779002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.779013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.779023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.779034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.779044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.779056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.779065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.779077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.779086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.779098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.396 [2024-07-24 19:02:39.779107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.396 [2024-07-24 19:02:39.779118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.397 [2024-07-24 19:02:39.779297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65984 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.397 [2024-07-24 19:02:39.779319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.397 [2024-07-24 19:02:39.779340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.397 [2024-07-24 19:02:39.779361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.397 [2024-07-24 19:02:39.779382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.397 [2024-07-24 19:02:39.779402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.397 [2024-07-24 19:02:39.779424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.397 [2024-07-24 19:02:39.779457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.397 [2024-07-24 19:02:39.779466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66032 len:8 PRP1 0x0 PRP2 0x0 00:26:01.397 [2024-07-24 19:02:39.779476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779525] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c23b50 was disconnected and freed. reset controller. 
00:26:01.397 [2024-07-24 19:02:39.779538] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:01.397 [2024-07-24 19:02:39.779563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.397 [2024-07-24 19:02:39.779576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.397 [2024-07-24 19:02:39.779597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.397 [2024-07-24 19:02:39.779621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.397 [2024-07-24 19:02:39.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.397 [2024-07-24 19:02:39.779651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.397 [2024-07-24 19:02:39.783936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.397 [2024-07-24 19:02:39.783972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffa30 (9): Bad file descriptor 00:26:01.397 [2024-07-24 19:02:39.866013] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
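
Taken together, the records above show one complete failover cycle: the active submission queue is deleted, every queued command comes back as ABORTED - SQ DELETION, the disconnected qpair is freed, bdev_nvme moves the trid from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller reset completes. The cycle is driven entirely over SPDK's JSON-RPC interface; the following is a minimal sketch of that pattern, consolidated from the rpc.py calls that appear in this log (socket path, addresses, ports and NQN are the ones used by this run, so read it as an illustration, not a drop-in script):

    #!/usr/bin/env bash
    # Sketch of the multipath failover pattern exercised by host/failover.sh.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Expose the subsystem on two additional TCP listeners on the target.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

    # Attach the same controller over every path; bdev_nvme keeps the extra
    # trids as failover targets for the NVMe0 controller.
    for port in 4420 4421 4422; do
        $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
    done

    # Detaching the active path triggers the "Start failover from X to Y"
    # and "Resetting controller successful" sequence recorded above.
    $RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
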
00:26:01.397
00:26:01.397                                                 Latency(us)
00:26:01.397 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:01.397 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:01.397 Verification LBA range: start 0x0 length 0x4000
00:26:01.397 NVMe0n1                     :      15.03    4894.08      19.12    499.00     0.00   23696.40     934.63   38606.66
00:26:01.397 ===================================================================================================================
00:26:01.397 Total                       :             4894.08      19.12    499.00     0.00   23696.40     934.63   38606.66
00:26:01.397 Received shutdown signal, test time was about 15.000000 seconds
00:26:01.397
00:26:01.397                                                 Latency(us)
00:26:01.397 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:01.397 ===================================================================================================================
00:26:01.397 Total                       :                0.00       0.00      0.00     0.00       0.00       0.00       0.00
19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2624133
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2624133 /var/tmp/bdevperf.sock
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2624133 ']'
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
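
At host/failover.sh@65-75 above, the script checks that the first phase produced exactly three 'Resetting controller successful' notices, then starts a second bdevperf instance in RPC-driven mode. A standalone sketch of those two steps, with the same paths as this run (try.txt is the capture file this test creates and later removes at host/failover.sh@115):

    #!/usr/bin/env bash
    # One successful reset is expected per forced failover (3 in phase one).
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }

    # -z makes bdevperf wait: no I/O runs until a perform_tests RPC arrives
    # on the -r socket, so controller paths can be attached first.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!

    # ... attach NVMe0 over the target paths (see the sketch above), then:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
    wait "$bdevperf_pid"
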
00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.397 19:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:01.397 19:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.397 19:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:01.397 19:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.397 [2024-07-24 19:02:46.382639] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.656 19:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:01.656 [2024-07-24 19:02:46.639599] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:01.915 19:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:02.174 NVMe0n1 00:26:02.174 19:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:02.432 00:26:02.432 19:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:03.000 00:26:03.000 19:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.000 19:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:03.259 19:02:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:03.518 19:02:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:06.803 19:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.803 19:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:06.803 19:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2625063 00:26:06.803 19:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:06.803 19:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2625063 00:26:07.740 0 00:26:07.740 19:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:07.740 [2024-07-24 19:02:45.888188] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:26:07.740 [2024-07-24 19:02:45.888255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624133 ] 00:26:07.740 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.740 [2024-07-24 19:02:45.968730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.740 [2024-07-24 19:02:46.050042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.740 [2024-07-24 19:02:48.282265] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:07.740 [2024-07-24 19:02:48.282318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.740 [2024-07-24 19:02:48.282334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.740 [2024-07-24 19:02:48.282346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.740 [2024-07-24 19:02:48.282356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.740 [2024-07-24 19:02:48.282366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.740 [2024-07-24 19:02:48.282376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.740 [2024-07-24 19:02:48.282386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.740 [2024-07-24 19:02:48.282396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.740 [2024-07-24 19:02:48.282405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.740 [2024-07-24 19:02:48.282438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.740 [2024-07-24 19:02:48.282456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bca30 (9): Bad file descriptor 00:26:07.740 [2024-07-24 19:02:48.288191] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:07.740 Running I/O for 1 seconds... 
00:26:07.740
00:26:07.740                                                 Latency(us)
00:26:07.740 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:07.740 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:07.740 Verification LBA range: start 0x0 length 0x4000
00:26:07.740 NVMe0n1                     :       1.02    3737.99      14.60      0.00     0.00   34085.12    5064.15   38368.35
00:26:07.740 ===================================================================================================================
00:26:07.740 Total                       :             3737.99      14.60      0.00     0.00   34085.12    5064.15   38368.35
00:26:07.999 19:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:08.258 19:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:08.258 19:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:08.516 19:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:08.516 19:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:08.775 19:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:09.034 19:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:12.329 19:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:12.329 19:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2624133
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2624133 ']'
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2624133
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2624133
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2624133'
killing process with pid 2624133
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2624133
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2624133
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:12.329 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:12.587 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:12.587 rmmod nvme_tcp 00:26:12.846 rmmod nvme_fabrics 00:26:12.846 rmmod nvme_keyring 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2620734 ']' 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2620734 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2620734 ']' 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2620734 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2620734 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2620734' 00:26:12.846 killing process with pid 2620734 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2620734 00:26:12.846 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2620734 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.140 19:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.043 19:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:15.043 00:26:15.043 real 0m39.570s 00:26:15.043 user 2m8.052s 00:26:15.043 sys 0m7.784s 00:26:15.043 19:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:15.043 19:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:15.043 ************************************ 00:26:15.043 END TEST nvmf_failover 00:26:15.043 ************************************ 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.302 ************************************ 00:26:15.302 START TEST nvmf_host_discovery 00:26:15.302 ************************************ 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:15.302 * Looking for test storage... 00:26:15.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:15.302 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:15.303 19:03:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.303 19:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:21.869 19:03:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:21.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:21.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:21.869 Found net devices under 0000:af:00.0: cvl_0_0 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:21.869 Found net devices under 0000:af:00.1: cvl_0_1 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.869 19:03:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:21.869 19:03:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:21.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:26:21.869 00:26:21.869 --- 10.0.0.2 ping statistics --- 00:26:21.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.869 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:21.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:26:21.869 00:26:21.869 --- 10.0.0.1 ping statistics --- 00:26:21.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.869 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:21.869 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2629891 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2629891 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2629891 ']' 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
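The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) pins the E810 port cvl_0_0 inside its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. A condensed sketch of what was executed, with interface and namespace names as captured in this run (they differ per machine/NIC):

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # default ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back

From here on, every target-side command, including nvmf_tgt itself, is wrapped in ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD (common.sh@243, @270, @480).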
00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.870 19:03:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.870 [2024-07-24 19:03:06.177119] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:26:21.870 [2024-07-24 19:03:06.177183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.870 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.870 [2024-07-24 19:03:06.264972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.870 [2024-07-24 19:03:06.372183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.870 [2024-07-24 19:03:06.372224] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.870 [2024-07-24 19:03:06.372237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.870 [2024-07-24 19:03:06.372248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.870 [2024-07-24 19:03:06.372257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.870 [2024-07-24 19:03:06.372281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.128 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.128 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:22.128 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.128 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.128 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.387 [2024-07-24 19:03:07.156744] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:22.387 [2024-07-24 19:03:07.164908] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.387 null0 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.387 null1 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2630170 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2630170 /tmp/host.sock 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2630170 ']' 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:22.387 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.387 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.387 [2024-07-24 19:03:07.242450] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
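Two SPDK apps are now up: the target (nvmfpid 2629891) inside the namespace on the default /var/tmp/spdk.sock, and a host-side app (hostpid 2630170) on /tmp/host.sock that will run the discovery client. rpc_cmd is a thin wrapper over scripts/rpc.py, so the target-side setup traced above is roughly equivalent to the following by hand; the $RPC path assumes the SPDK checkout used by this job, and the flags are copied verbatim from the trace:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    # target-side plumbing, issued against /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009              # discovery service on port 8009
    $RPC bdev_null_create null0 1000 512        # 1000 MiB null bdev, 512 B blocks
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine                  # wait until bdev examination settles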
00:26:22.387 [2024-07-24 19:03:07.242504] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630170 ] 00:26:22.387 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.387 [2024-07-24 19:03:07.323709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.646 [2024-07-24 19:03:07.413963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.646 
19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:22.646 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:22.905 19:03:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 [2024-07-24 19:03:07.878901] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.905 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.164 19:03:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:23.164 19:03:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:23.732 [2024-07-24 19:03:08.544028] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.732 [2024-07-24 19:03:08.544051] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.732 [2024-07-24 19:03:08.544068] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.732 
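The blocks of common/autotest_common.sh@912-@918 lines that keep recurring in the trace are the waitforcondition helper unrolled by xtrace: it re-evaluates a condition string once per second for up to ten tries. Reconstructed from those lines; the give-up path is never exercised in this run, so its exact form here is an assumption:

    waitforcondition() {
        local cond=$1               # @912
        local max=10                # @913
        while (( max-- )); do       # @914
            if eval "$cond"; then   # @915
                return 0            # @916: condition held
            fi
            sleep 1                 # @918: retry once a second
        done
        return 1                    # assumed give-up path, not seen in this trace
    }
    # e.g. waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'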
[2024-07-24 19:03:08.630358] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:23.732 [2024-07-24 19:03:08.688091] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.732 [2024-07-24 19:03:08.688114] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
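Because bdev_nvme_start_discovery was given -b nvme, the controller that discovery attaches is named nvme0 and its namespace bdevs nvme0n1, nvme0n2, and so on. The @59/@55 jq pipelines that the conditions above keep polling amount to these two helpers from host/discovery.sh, queried against the host-side app:

    get_subsystem_names() {          # host/discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {                # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    # at this point in the run: get_subsystem_names -> "nvme0",
    #                           get_bdev_list       -> "nvme0n1"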
00:26:24.300 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.301 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.559 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.818 19:03:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:24.818 19:03:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.755 [2024-07-24 19:03:10.715914] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:25.755 [2024-07-24 19:03:10.717134] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:25.755 [2024-07-24 19:03:10.717165] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.755 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:26.014 19:03:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.014 [2024-07-24 19:03:10.803418] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:26.014 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:26.015 19:03:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:26.015 [2024-07-24 19:03:10.903156] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:26.015 [2024-07-24 19:03:10.903179] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:26.015 [2024-07-24 19:03:10.903186] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # 
rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:26.951 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.211 [2024-07-24 19:03:11.996018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.211 [2024-07-24 19:03:11.996049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.211 [2024-07-24 19:03:11.996062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.211 [2024-07-24 19:03:11.996077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.211 [2024-07-24 19:03:11.996088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.211 [2024-07-24 19:03:11.996097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.211 [2024-07-24 19:03:11.996107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.211 [2024-07-24 19:03:11.996117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.211 [2024-07-24 19:03:11.996127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.211 [2024-07-24 19:03:11.996756] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:27.211 [2024-07-24 19:03:11.996775] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.211 19:03:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:27.211 [2024-07-24 19:03:12.006024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:27.211 [2024-07-24 19:03:12.016068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.211 [2024-07-24 19:03:12.016396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.211 [2024-07-24 19:03:12.016416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.211 [2024-07-24 19:03:12.016428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.211 [2024-07-24 19:03:12.016445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.211 [2024-07-24 19:03:12.016458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.211 [2024-07-24 19:03:12.016468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.211 [2024-07-24 19:03:12.016478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.211 [2024-07-24 19:03:12.016493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:27.211 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.211 [2024-07-24 19:03:12.026136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.211 [2024-07-24 19:03:12.026443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.211 [2024-07-24 19:03:12.026459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.211 [2024-07-24 19:03:12.026470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.211 [2024-07-24 19:03:12.026484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.211 [2024-07-24 19:03:12.026497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.212 [2024-07-24 19:03:12.026506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.212 [2024-07-24 19:03:12.026516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.212 [2024-07-24 19:03:12.026529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.212 [2024-07-24 19:03:12.036194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.212 [2024-07-24 19:03:12.036488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.212 [2024-07-24 19:03:12.036505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.212 [2024-07-24 19:03:12.036514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.212 [2024-07-24 19:03:12.036529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.212 [2024-07-24 19:03:12.036542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.212 [2024-07-24 19:03:12.036551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.212 [2024-07-24 19:03:12.036560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.212 [2024-07-24 19:03:12.036573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
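The errno = 111 in these repeated connect() failures is ECONNREFUSED: the test has just removed the 4420 listener, so the host's reconnect attempts to 10.0.0.2:4420 are refused until the next discovery log page steers it to 4421. If the moreutils errno(1) utility happens to be installed (an assumption; it is not part of this test), the mapping can be confirmed directly:

    $ errno 111
    ECONNREFUSED 111 Connection refused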
00:26:27.212 [2024-07-24 19:03:12.046255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.212 [2024-07-24 19:03:12.046556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.212 [2024-07-24 19:03:12.046574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.212 [2024-07-24 19:03:12.046585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.212 [2024-07-24 19:03:12.046600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.212 [2024-07-24 19:03:12.046620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.212 [2024-07-24 19:03:12.046629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.212 [2024-07-24 19:03:12.046638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.212 [2024-07-24 19:03:12.046651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.212 [2024-07-24 19:03:12.056319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.212 [2024-07-24 19:03:12.056644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.212 [2024-07-24 19:03:12.056667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:27.212 [2024-07-24 19:03:12.056677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.212 [2024-07-24 19:03:12.056695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.212 [2024-07-24 19:03:12.056709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.212 [2024-07-24 19:03:12.056718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.212 [2024-07-24 19:03:12.056727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.212 [2024-07-24 19:03:12.056741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
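All of the '(( max-- ))' / eval / 'sleep 1' expansions in this trace come from the waitforcondition helper in autotest_common.sh (the @912-@918 markers). A minimal sketch consistent with the expansions seen here — the exact failure handling is not visible in this excerpt — is:

    waitforcondition() {
        local cond=$1   # shell expression to poll, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # up to ~10 one-second retries
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1
        done
        return 1        # condition never became true
    }

This is why transient states (controllers still resetting, log pages still in flight) show up as repeated @914/@915 expansions interleaved with the error logs rather than as immediate failures.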
00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.212 [2024-07-24 19:03:12.066381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.212 [2024-07-24 19:03:12.066597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.212 [2024-07-24 19:03:12.066621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.212 [2024-07-24 19:03:12.066631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.212 [2024-07-24 19:03:12.066645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.212 [2024-07-24 19:03:12.066659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.212 [2024-07-24 19:03:12.066669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.212 [2024-07-24 19:03:12.066678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.212 [2024-07-24 19:03:12.066692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
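Two more helpers round out the checks in this tail of the test, again pieced together from the @55 and @74/@75 expansions (the notify_id bookkeeping is inferred from the notify_id=2 -> notify_id=4 progression in the trace, not quoted from discovery.sh):

    get_bdev_list() {
        # Bdev names visible to the host app, sorted and space-joined:
        # "nvme0n1 nvme0n2" while attached, "" once discovery is stopped.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # Count events newer than the last notification id seen, then
        # advance the cursor so each event is only counted once.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

is_notification_count_eq then simply polls get_notification_count through waitforcondition until the count matches the expected value (0 here, 2 after bdev_nvme_stop_discovery later in the trace).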
00:26:27.212 [2024-07-24 19:03:12.076441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.212 [2024-07-24 19:03:12.076722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.212 [2024-07-24 19:03:12.076748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba500 with addr=10.0.0.2, port=4420 00:26:27.212 [2024-07-24 19:03:12.076759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba500 is same with the state(6) to be set 00:26:27.212 [2024-07-24 19:03:12.076778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba500 (9): Bad file descriptor 00:26:27.212 [2024-07-24 19:03:12.076793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.212 [2024-07-24 19:03:12.076802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.212 [2024-07-24 19:03:12.076812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.212 [2024-07-24 19:03:12.076825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.212 [2024-07-24 19:03:12.083349] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:27.212 [2024-07-24 19:03:12.083371] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:27.212 19:03:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:27.212 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.213 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( 
max-- )) 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.472 19:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.848 [2024-07-24 19:03:13.448829] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:28.848 [2024-07-24 19:03:13.448851] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:28.848 [2024-07-24 19:03:13.448867] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:28.848 [2024-07-24 19:03:13.537165] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:28.848 [2024-07-24 19:03:13.807969] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:28.848 [2024-07-24 19:03:13.808013] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.848 request: 00:26:28.848 { 00:26:28.848 "name": "nvme", 00:26:28.848 "trtype": "tcp", 00:26:28.848 "traddr": "10.0.0.2", 00:26:28.848 "adrfam": "ipv4", 00:26:28.848 "trsvcid": "8009", 00:26:28.848 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:28.848 "wait_for_attach": true, 00:26:28.848 "method": "bdev_nvme_start_discovery", 00:26:28.848 "req_id": 1 00:26:28.848 } 00:26:28.848 Got JSON-RPC error response 00:26:28.848 response: 00:26:28.848 { 00:26:28.848 "code": -17, 00:26:28.848 "message": "File exists" 00:26:28.848 } 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:28.848 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:28.849 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:28.849 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.849 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:28.849 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.849 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:28.849 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.107 request: 00:26:29.107 { 00:26:29.107 "name": "nvme_second", 00:26:29.107 "trtype": "tcp", 00:26:29.107 "traddr": "10.0.0.2", 00:26:29.107 "adrfam": "ipv4", 00:26:29.107 "trsvcid": "8009", 00:26:29.107 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:29.107 "wait_for_attach": true, 00:26:29.107 "method": "bdev_nvme_start_discovery", 00:26:29.107 "req_id": 1 00:26:29.107 } 00:26:29.107 Got JSON-RPC error response 00:26:29.107 response: 00:26:29.107 { 00:26:29.107 "code": -17, 00:26:29.107 "message": "File exists" 00:26:29.107 } 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:29.107 19:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.107 19:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.486 [2024-07-24 19:03:15.072884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.486 [2024-07-24 19:03:15.072919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f71e0 with addr=10.0.0.2, port=8010 00:26:30.486 [2024-07-24 19:03:15.072935] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:30.486 [2024-07-24 19:03:15.072944] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:30.486 [2024-07-24 19:03:15.072953] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:31.422 [2024-07-24 19:03:16.075230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.422 [2024-07-24 19:03:16.075261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f71e0 with addr=10.0.0.2, port=8010 00:26:31.422 [2024-07-24 19:03:16.075276] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:31.422 [2024-07-24 19:03:16.075284] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:31.422 [2024-07-24 19:03:16.075293] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:32.359 [2024-07-24 19:03:17.077420] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:32.359 request: 00:26:32.359 { 00:26:32.359 "name": "nvme_second", 00:26:32.359 "trtype": "tcp", 00:26:32.359 "traddr": "10.0.0.2", 00:26:32.359 "adrfam": "ipv4", 00:26:32.359 "trsvcid": "8010", 00:26:32.359 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:32.359 "wait_for_attach": false, 00:26:32.359 "attach_timeout_ms": 3000, 00:26:32.359 "method": "bdev_nvme_start_discovery", 00:26:32.359 "req_id": 1 00:26:32.359 } 00:26:32.359 Got JSON-RPC error response 00:26:32.359 response: 00:26:32.359 { 00:26:32.359 "code": -110, 00:26:32.359 "message": "Connection timed out" 00:26:32.359 } 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2630170 00:26:32.359 19:03:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.359 rmmod nvme_tcp 00:26:32.359 rmmod nvme_fabrics 00:26:32.359 rmmod nvme_keyring 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2629891 ']' 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2629891 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2629891 ']' 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2629891 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2629891 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2629891' 00:26:32.359 killing process with pid 2629891 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2629891 00:26:32.359 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2629891 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.619 19:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:35.153 00:26:35.153 real 0m19.456s 00:26:35.153 user 0m24.664s 00:26:35.153 sys 0m6.040s 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.153 ************************************ 00:26:35.153 END TEST nvmf_host_discovery 00:26:35.153 ************************************ 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.153 ************************************ 00:26:35.153 START TEST nvmf_host_multipath_status 00:26:35.153 ************************************ 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:35.153 * Looking for test storage... 00:26:35.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.153 
19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.153 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:35.154 19:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.424 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.424 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.424 19:03:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.424 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.425 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.425 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.425 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:40.683 00:26:40.683 --- 10.0.0.2 ping statistics --- 00:26:40.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.683 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:26:40.683 00:26:40.683 --- 10.0.0.1 ping statistics --- 00:26:40.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.683 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2636031 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2636031 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2636031 ']' 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.683 19:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:40.683 [2024-07-24 19:03:25.561623] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
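The trace above is the standard nvmf/common.sh TCP topology for phy runs: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), NVMe/TCP traffic is admitted on 4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A minimal sketch of that sequence, reconstructed from the trace (interface names, addresses and the 0x3 core mask come from the log; this is an illustration of nvmf_tcp_init, not a verbatim copy):

    # target port lives in its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP (port 4420) in through the initiator-side interface, then verify
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace on cores 0-1 (mask 0x3)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &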
00:26:40.683 [2024-07-24 19:03:25.561681] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.683 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.683 [2024-07-24 19:03:25.647465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:40.941 [2024-07-24 19:03:25.743013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.941 [2024-07-24 19:03:25.743053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.941 [2024-07-24 19:03:25.743063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.941 [2024-07-24 19:03:25.743072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.942 [2024-07-24 19:03:25.743080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.942 [2024-07-24 19:03:25.743130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.942 [2024-07-24 19:03:25.743136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2636031 00:26:41.199 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:41.458 [2024-07-24 19:03:26.279792] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.458 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:41.716 Malloc0 00:26:41.716 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:41.975 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:41.975 19:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.233 [2024-07-24 19:03:27.105933] 
tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.233 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:42.491 [2024-07-24 19:03:27.278400] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2636322 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2636322 /var/tmp/bdevperf.sock 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2636322 ']' 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:42.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
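With the target up, the test assembles the multipath topology over RPC: a TCP transport, a 64 MiB / 512 B-block Malloc0 namespace, and subsystem nqn.2016-06.io.spdk:cnode1 created with ANA reporting enabled (-r) and listening on both 4420 and 4421; bdevperf is then started as the initiator, held idle (-z) until driven over its own RPC socket. A condensed sketch of that sequence as it appears in the trace (rpc.py paths shortened; the flags are the ones actually used above, and the -r/-m glosses are the usual rpc.py meanings, worth re-checking against rpc.py --help):

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # stock TCP transport opts for this test
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB backing namespace
    # -r: enable ANA reporting; -m: cap the namespace count (here 2)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # initiator: idle bdevperf instance, 128-deep 4 KiB verify workload once started
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

As the trace goes on to show, Nvme0 is then attached to the same subsystem twice, once per listener, with the second bdev_nvme_attach_controller carrying -x multipath, so bdevperf ends up with two I/O paths to one namespace; the workload itself is kicked off with bdevperf.py ... perform_tests.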
00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.491 19:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:43.426 19:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.426 19:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:43.426 19:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:43.685 19:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:43.943 Nvme0n1 00:26:43.943 19:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:44.509 Nvme0n1 00:26:44.509 19:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:44.509 19:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:46.412 19:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:46.413 19:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:46.672 19:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.930 19:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:47.876 19:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:47.876 19:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.876 19:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.876 19:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:48.135 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.135 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:48.135 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.135 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.393 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.393 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.393 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.393 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.652 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.652 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.652 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.652 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.911 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.911 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.911 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.911 19:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:49.169 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.169 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:49.169 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.169 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:49.428 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.428 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:49.428 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.687 19:03:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:49.945 19:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:50.881 19:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:50.881 19:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:50.881 19:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.881 19:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.139 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.139 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:51.139 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.139 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.398 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.398 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.398 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.398 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:51.657 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.657 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.657 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.657 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.917 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.917 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.917 19:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.917 19:03:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:52.176 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.176 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:52.176 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.176 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:52.434 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.434 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:52.434 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.693 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:52.952 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:53.887 19:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:53.887 19:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.887 19:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.887 19:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:54.146 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.146 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:54.146 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.146 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:54.404 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.404 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:54.404 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.404 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:54.662 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.662 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:54.662 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.662 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.920 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.920 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.920 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.920 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.179 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.179 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:55.179 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.179 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.437 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.437 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:55.437 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:55.696 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:55.955 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:56.892 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:56.892 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:56.892 19:03:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.892 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.151 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.151 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:57.151 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.151 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.410 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.410 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.410 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.410 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.669 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.669 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:57.669 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.669 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:57.927 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.927 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:58.185 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.185 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.443 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.443 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:58.443 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.444 19:03:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.706 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.706 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:58.706 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:58.964 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:59.262 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:00.198 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:00.198 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:00.198 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.198 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.456 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:00.457 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:00.457 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.457 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:00.715 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:00.715 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:00.715 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.715 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.975 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.975 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.975 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.975 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.233 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:01.492 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.492 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:01.492 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:01.751 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:02.010 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:02.946 19:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:02.946 19:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:02.946 19:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.946 19:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:03.204 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:03.204 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:03.204 19:03:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.204 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:03.463 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.463 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:03.463 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.463 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.721 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.721 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.721 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.721 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:03.981 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.981 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:03.981 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.981 19:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.239 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.239 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:04.239 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.239 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:04.498 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.498 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:04.757 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:04.757 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:05.015 19:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:05.274 19:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:06.650 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.651 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.909 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.909 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:06.909 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.909 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.168 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.168 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.168 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.168 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.427 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.427 19:03:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.427 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.427 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.686 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.686 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:07.686 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.686 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:07.945 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.945 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:07.945 19:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:08.204 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:08.462 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:09.397 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:09.397 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:09.397 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.397 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.656 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.656 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:09.656 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.656 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:09.915 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.915 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:09.915 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.915 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:10.173 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.173 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.173 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.173 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.432 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.432 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.432 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.432 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.690 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.690 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:10.690 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.690 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:10.948 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.948 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:10.948 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:11.207 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:11.465 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
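Every round above follows the same three-step pattern: set_ANA_state pushes one ANA state per listener, a one-second sleep gives the initiator time to pick up the ANA log page change, and check_status/port_status assert the current/connected/accessible flags that bdev_nvme_get_io_paths reports for each trsvcid. The helpers are reconstructible from the trace roughly as follows (names and RPC calls match the log; the bodies are a paraphrase of host/multipath_status.sh, not a verbatim copy):

    set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for 4421
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # $1 = port, $2 = field (current|connected|accessible), $3 = expected
        local status
        status=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

Note the policy switch at multipath_status.sh@116 above: until bdev_nvme_set_multipath_policy -p active_active, only one path at a time reports current=true (active_passive behaviour, tracking the optimized/non_optimized ANA states); afterwards every connected and accessible path carries I/O, which is why the optimized/optimized round right after it expects current=true on 4420 and 4421 simultaneously.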
00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.845 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:13.103 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.103 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:13.103 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.103 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.360 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.360 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.360 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.360 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:13.617 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.617 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:13.617 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.618 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.875 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.875 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:13.875 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.875 19:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:14.133 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.133 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:14.133 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:14.391 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:14.649 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:15.584 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:15.584 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:15.584 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.584 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.843 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.843 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:15.843 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.843 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.410 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:16.669 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.669 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:16.669 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.669 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.927 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.927 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:16.927 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.928 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2636322 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2636322 ']' 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2636322 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2636322 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2636322' 00:27:17.186 killing process with pid 2636322 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2636322 00:27:17.186 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2636322 00:27:17.461 Connection closed with partial response: 00:27:17.461 00:27:17.461 00:27:17.461 
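The killprocess call above is the standard teardown helper from common/autotest_common.sh; a condensed sketch of what the @948–@972 trace markers are executing (reconstructed from the trace; the real helper also has a non-Linux branch behind the uname check, which this Linux run steps past):

  # killprocess <pid>: sanity-check the pid, refuse to kill a bare sudo,
  # then SIGTERM the process and reap it.
  killprocess() {
      [ -n "$1" ] || return 1
      kill -0 "$1" || return 1                # the pid must still be alive
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$1")   # resolves to reactor_2 above
      fi
      [ "$process_name" = sudo ] && return 1  # never kill sudo itself
      echo "killing process with pid $1"
      kill "$1" && wait "$1"
  }

The second wait at multipath_status.sh@139 (next line) reaps the same pid so the script can collect bdevperf's exit status before cat'ing try.txt; the "Connection closed with partial response" lines appear to be bdevperf noticing the target-side teardown mid-I/O, which is expected at this point in the test.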
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2636322 00:27:17.461 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:17.461 [2024-07-24 19:03:27.343181] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:27:17.461 [2024-07-24 19:03:27.343229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636322 ] 00:27:17.461 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.461 [2024-07-24 19:03:27.443713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.461 [2024-07-24 19:03:27.587155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.461 Running I/O for 90 seconds... 00:27:17.461 [2024-07-24 19:03:43.700284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.461 [2024-07-24 19:03:43.700361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:27:17.461 [2024-07-24 19:03:43.700809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.700954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.700993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.701025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.701065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.701087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.701127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.701149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.701211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.701250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.461 [2024-07-24 19:03:43.701272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:17.461 [2024-07-24 19:03:43.701311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.701941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.701962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.702001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.702023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.702062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.702084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.702124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.702145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.702185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.702206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.702247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.702268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.702309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.702330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.704567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.704651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.704714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.704783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.462 [2024-07-24 19:03:43.704844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.704905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.704945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.462 [2024-07-24 19:03:43.704966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.462 [2024-07-24 19:03:43.705654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:17.462 [2024-07-24 19:03:43.705694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.705716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.705756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.705777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.705817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.705838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.705878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.705900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.705940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.705961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:27:17.463 [2024-07-24 19:03:43.706741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.463 [2024-07-24 19:03:43.706919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.706960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.706982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.463 [2024-07-24 19:03:43.707726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.463 [2024-07-24 19:03:43.707748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.707787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.707809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.707857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.707878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.707923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.707945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.707985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.708045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.708110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.708185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.708247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.708308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.708370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.708391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.709940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.709979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.464 [2024-07-24 19:03:43.710170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.710937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.710978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:17.464 [2024-07-24 19:03:43.711613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.464 [2024-07-24 19:03:43.711636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.711677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.711698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.711737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.711770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.711816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.711842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.711882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.711904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.711944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.711965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.712004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.712026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:17.465 [2024-07-24 19:03:43.712065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.465 [2024-07-24 19:03:43.712086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:27:17.465 [2024-07-24 19:03:43.712127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.465 [2024-07-24 19:03:43.712148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:17.465 [2024-07-24 19:03:43.712187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.465 [2024-07-24 19:03:43.712208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:17.465 [2024-07-24 19:03:43.712248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.465 [2024-07-24 19:03:43.712269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:17.465 [2024-07-24 19:03:43.712309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.465 [2024-07-24 19:03:43.712330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
[~200 further nvme_qpair.c command/completion *NOTICE* pairs, 19:03:43.712 through 19:03:43.730, elided: every outstanding READ/WRITE on sqid/qid 1 (lba 79944-80960, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
00:27:17.472 [2024-07-24 19:03:43.730585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.730632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.730656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.730696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.730719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.730759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.730782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.730822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.730845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.730885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.730907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.730947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.730970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.731478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.472 [2024-07-24 19:03:43.733435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.472 [2024-07-24 19:03:43.733476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.733935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.733957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:17.473 [2024-07-24 19:03:43.733997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.734947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.734970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.473 [2024-07-24 19:03:43.735477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:17.473 [2024-07-24 19:03:43.735714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.473 [2024-07-24 19:03:43.735737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.735777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.735801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.735845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.474 [2024-07-24 19:03:43.735868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.735908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.735930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.735971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.735994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80304 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.736951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.736990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.737013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.737052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.737075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.737115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.737138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.737178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.737200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.737241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.737264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.739154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.739193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.739238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.739268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.739310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.739333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.739374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.474 [2024-07-24 19:03:43.739396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:17.474 [2024-07-24 19:03:43.739436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.739460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.739522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.739585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 
19:03:43.739636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.739660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.739721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.739784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.739847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.739909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.739949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.739971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.740801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.740994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:17.475 [2024-07-24 19:03:43.741499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.475 [2024-07-24 19:03:43.741707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.475 [2024-07-24 19:03:43.741769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:17.475 [2024-07-24 19:03:43.741810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.476 [2024-07-24 19:03:43.741832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.476 [2024-07-24 19:03:43.741872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.476 [2024-07-24 19:03:43.741895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:17.476 [2024-07-24 19:03:43.741934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.476 [2024-07-24 19:03:43.741957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:17.476 [2024-07-24 19:03:43.741998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.476 [2024-07-24 19:03:43.742020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:17.476 [2024-07-24 19:03:43.742060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.476 [2024-07-24 19:03:43.742082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.476 [2024-07-24 19:03:43.742122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.476 [2024-07-24 19:03:43.742145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... ~140 further command/completion pairs from 19:03:43.742 to 19:03:43.754 condensed: WRITE sqid:1 lba:80184-80960 and READ sqid:1 lba:79944-80176 printed by nvme_io_qpair_print_command (nvme_qpair.c:243), each completed by spdk_nvme_print_completion (nvme_qpair.c:474) with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0063-007f and wrapping 0000-006f ...]
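[editor's note: the "(03/02)" in these completions decodes as NVMe Status Code Type 0x3 (path-related) with Status Code 0x02, which SPDK renders as ASYMMETRIC ACCESS INACCESSIBLE: during the failover step of this test the target reports the namespace's ANA state as inaccessible on this path, and dnr:0 marks the failed WRITEs/READs as retryable on another path. Below is a minimal, illustrative sketch of how an SPDK initiator could recognize this status in its completion callback; the callback name io_complete and the error counter are hypothetical, while the struct fields and constants are taken from SPDK's public nvme headers.]

    #include "spdk/nvme.h"

    /* Hypothetical counter of ANA-inaccessible completions. */
    static uint64_t g_ana_inaccessible_errors;

    /* Completion callback matching the spdk_nvme_cmd_cb signature. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            /* status.sct and status.sc are the two fields that
             * spdk_nvme_print_completion renders as "(03/02)" above. */
            if (cpl->status.sct == SPDK_NVME_SCT_PATH &&
                cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE) {
                    /* This path reports the namespace as ANA-inaccessible;
                     * dnr is 0, so the I/O may be resubmitted once an
                     * accessible path is available. */
                    g_ana_inaccessible_errors++;
            }
    }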
00:27:17.480 [2024-07-24 19:03:59.551639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.480 [2024-07-24 19:03:59.551713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
[... ~60 further command/completion pairs from 19:03:59.551 to 19:03:59.555 condensed: WRITE sqid:1 lba:42592-43096 and READ sqid:1 lba:41952-42736, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0024-0062 ...]
00:27:17.482 [2024-07-24 19:03:59.555737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.482 [2024-07-24 19:03:59.555758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.560517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.560578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.560653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.560945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.560967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.561027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.482 [2024-07-24 19:03:59.561088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.482 [2024-07-24 19:03:59.561517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:17.482 [2024-07-24 19:03:59.561556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.483 [2024-07-24 19:03:59.561578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.561648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.561709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.561769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.561830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.561891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.561952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.561991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.562013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.562074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.562134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.562196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.562261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.562323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.562384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.562444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.562485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.562507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.565278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.565339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.565400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.565530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.565592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.565785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.565946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.565967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.566028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.566089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.566150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.566211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.566276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.483 [2024-07-24 19:03:59.566337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.566398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.483 [2024-07-24 19:03:59.566459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:17.483 [2024-07-24 19:03:59.566498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.566520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.566581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.566656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.566717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.566778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.566839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.566900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.566940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.566961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.567022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.567087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.567148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.567210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.567271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.567332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.567373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.567394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.569432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.569499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.569562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.569636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.569696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.569758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.569826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.569895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.569936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.569957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.573228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.573294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.573356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.573417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.573803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.573965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.484 [2024-07-24 19:03:59.573987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.574026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.484 [2024-07-24 19:03:59.574048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:17.484 [2024-07-24 19:03:59.574087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.574917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.574956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.574978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.575039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.575100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.575161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.575222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.575282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.575351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.575412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.575453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.575474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.580511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.580582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.580656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.580718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.580780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.580841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.580903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.580942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.580964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.581025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.485 [2024-07-24 19:03:59.581095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.581156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.581217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.581278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.581339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.485 [2024-07-24 19:03:59.581401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:17.485 [2024-07-24 19:03:59.581441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.581462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.581523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.581585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.581655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.581716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.581777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.581843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.581905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.581944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.581966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.582027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.582150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.582211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.582395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.582585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.582659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.582944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.582966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.583027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.583088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.583149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.583210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.486 [2024-07-24 19:03:59.583271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.583338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.583399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.486 [2024-07-24 19:03:59.583460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:17.486 [2024-07-24 19:03:59.583501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.486 [2024-07-24 19:03:59.583522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:17.486 [2024-07-24 19:03:59.587471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.486 [2024-07-24 19:03:59.587515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:17.486 [2024-07-24 19:03:59.587560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.486 [2024-07-24 19:03:59.587583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.486 [2024-07-24 19:03:59.587636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.486 [2024-07-24 19:03:59.587658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.587698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.587719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.587759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.587781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.587821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.587843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.587883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.587904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.587944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.587966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.588224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.588408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.588531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.588924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.588963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.588985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.589174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.589418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.487 [2024-07-24 19:03:59.589479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.589582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.589610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.591366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.591408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.591452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.487 [2024-07-24 19:03:59.591475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:17.487 [2024-07-24 19:03:59.591516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.487 [2024-07-24 19:03:59.591545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.591618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.591681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.591742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.591803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.591864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.591925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.591965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.591987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.592028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.592049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.595619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.595743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.595803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.595865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.595926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.595965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.595987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:27:17.488 [2024-07-24 19:03:59.596276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.596860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.596900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.488 [2024-07-24 19:03:59.596922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.601878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.601924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.601998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.602024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:17.488 [2024-07-24 19:03:59.602063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.488 [2024-07-24 19:03:59.602084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.602329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.602390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.602451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.602840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.602901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.602941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.602963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.603025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:17.489 [2024-07-24 19:03:59.603086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.603147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.603453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.603517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.603578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.603956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.603996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.604018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.604058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.489 [2024-07-24 19:03:59.604080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.604119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.604141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.604181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.604202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.604241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.489 [2024-07-24 19:03:59.604263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:17.489 [2024-07-24 19:03:59.604308] nvme_qpair.c: 
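(Annotation, not part of the captured output: SPDK prints NVMe status as "(SCT/SC)", so "(03/02)" decodes to Status Code Type 0x3, Path Related, with Status Code 0x02, Asymmetric Access Inaccessible; in the completion status field, bit 31 is DNR, bits 27:25 the SCT, and bits 24:17 the SC. The ANA state of the path serving qid:1 has gone inaccessible, which is consistent with the path-failover scenario this multipath test exercises, and dnr:0 means the host is permitted to retry the I/O rather than fail it.)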
00:27:17.490 Received shutdown signal, test time was about 32.566830 seconds
00:27:17.490 
00:27:17.490 Latency(us)
00:27:17.490 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:17.490 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:17.490 Verification LBA range: start 0x0 length 0x4000
00:27:17.490 Nvme0n1                     :      32.56    4598.24      17.96       0.00     0.00   27757.79    1489.45 4087539.90
00:27:17.490 ===================================================================================================================
00:27:17.490 Total                       :    4598.24      17.96       0.00     0.00   27757.79    1489.45 4087539.90
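(Annotation, not part of the captured output: the MiB/s column follows from IOPS times the 4096-byte I/O size; a minimal awk sanity check of the Nvme0n1 row:
    awk 'BEGIN { printf "%.2f MiB/s\n", 4598.24 * 4096 / 1048576 }'   # prints 17.96
which matches the summary above.)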
00:27:17.490 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:27:17.748 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:17.749 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2636031 ']'
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2636031
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2636031 ']'
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2636031
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2636031
00:27:18.008 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:18.008 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:18.008 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2636031'
killing process with pid 2636031
19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2636031
00:27:18.008 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2636031
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:18.267 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:20.169 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:20.169 
00:27:20.169 real 0m45.464s
00:27:20.169 user 2m9.354s
00:27:20.169 sys 0m11.188s
00:27:20.169 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:20.169 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:20.169 ************************************
00:27:20.169 END TEST nvmf_host_multipath_status
00:27:20.169 ************************************
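(Annotation, not part of the captured output: the nvmftestfini sequence above reduces to a short manual cleanup; a minimal sketch using only commands and values visible in this run:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod output above
    modprobe -v -r nvme-fabrics
    kill 2636031                   # the SPDK target, process_name=reactor_0
    ip -4 addr flush cvl_0_1       # remove the test address from the interface
The pid 2636031 and interface name cvl_0_1 are specific to this job.)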
00:27:20.169 19:04:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
19:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
19:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
19:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.169 ************************************
00:27:20.169 START TEST nvmf_discovery_remove_ifc
00:27:20.169 ************************************
00:27:20.169 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:20.428 * Looking for test storage...
00:27:20.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:20.428 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.429 19:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.000 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.000 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.000 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.000 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.000 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.000 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.001 19:04:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:27.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:27.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:27.001 Found net devices under 0000:af:00.0: cvl_0_0 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.001 
19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:27.001 Found net devices under 0000:af:00.1: cvl_0_1 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.001 19:04:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:27.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:27.001 00:27:27.001 --- 10.0.0.2 ping statistics --- 00:27:27.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.001 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:27:27.001 00:27:27.001 --- 10.0.0.1 ping statistics --- 00:27:27.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.001 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.001 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2646258 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2646258 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2646258 ']' 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
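The nvmf_tcp_init trace above boils down to a small namespace topology: the two ice ports (cvl_0_0 and cvl_0_1, presumably cabled back-to-back on this rig) are split so that the target port lives in its own network namespace while the initiator port stays in the default one, each side gets a 10.0.0.0/24 address, and a firewall rule admits the NVMe/TCP port. Condensed from the commands visible in the trace (interface names are specific to this run):

# Condensed from the nvmf_tcp_init trace; cvl_0_0 is the target-side port,
# cvl_0_1 the initiator-side port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # sanity check

Both ping checks above report 0% loss, so nvmf_tgt can then be launched inside the namespace via ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD prefix in the trace does.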
00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.002 19:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.002 [2024-07-24 19:04:11.267401] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:27:27.002 [2024-07-24 19:04:11.267509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.002 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.002 [2024-07-24 19:04:11.395000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.002 [2024-07-24 19:04:11.499849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.002 [2024-07-24 19:04:11.499903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.002 [2024-07-24 19:04:11.499916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.002 [2024-07-24 19:04:11.499927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.002 [2024-07-24 19:04:11.499936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.002 [2024-07-24 19:04:11.499969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.261 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.261 [2024-07-24 19:04:12.227952] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.261 [2024-07-24 19:04:12.236112] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:27.261 null0 00:27:27.261 [2024-07-24 19:04:12.268124] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2646528 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2646528 /tmp/host.sock 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2646528 ']' 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:27.520 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.520 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.520 [2024-07-24 19:04:12.343614] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:27:27.520 [2024-07-24 19:04:12.343677] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2646528 ] 00:27:27.520 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.520 [2024-07-24 19:04:12.427945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.520 [2024-07-24 19:04:12.518547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:27.779 
19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.779 19:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.715 [2024-07-24 19:04:13.655205] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:28.715 [2024-07-24 19:04:13.655229] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:28.715 [2024-07-24 19:04:13.655246] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.975 [2024-07-24 19:04:13.783698] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:28.975 [2024-07-24 19:04:13.887489] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:28.975 [2024-07-24 19:04:13.887548] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:28.975 [2024-07-24 19:04:13.887576] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:28.975 [2024-07-24 19:04:13.887594] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:28.975 [2024-07-24 19:04:13.887627] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.975 [2024-07-24 19:04:13.894266] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb67450 was disconnected and freed. delete nvme_qpair. 
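The host side is driven entirely over /tmp/host.sock: discovery is started with bdev_nvme_start_discovery (ctrlr-loss-timeout 2s, reconnect-delay 1s, fast-io-fail 1s, --wait-for-attach), and the test then polls until the expected bdev name appears. Reconstructed, roughly, from the rpc_cmd/jq pipeline repeated throughout the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py):

# Roughly what the trace shows at discovery_remove_ifc.sh lines 29-34:
get_bdev_list() {
    # list bdev names from the host app's RPC socket, sorted, one line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local bdev=$1
    # poll once per second until the list matches the expected value
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # as in the trace: loop until nvme0n1 enumerates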
00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:28.975 19:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.234 19:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.169 19:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.546 19:04:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:31.546 19:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.481 19:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:33.418 19:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.353 [2024-07-24 19:04:19.328501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:34.353 [2024-07-24 19:04:19.328552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.353 [2024-07-24 19:04:19.328567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.353 [2024-07-24 19:04:19.328579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.353 [2024-07-24 19:04:19.328589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.353 [2024-07-24 19:04:19.328601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.353 [2024-07-24 19:04:19.328670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.353 [2024-07-24 19:04:19.328681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.353 [2024-07-24 19:04:19.328691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.353 [2024-07-24 19:04:19.328701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.353 [2024-07-24 19:04:19.328711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.353 [2024-07-24 19:04:19.328721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2dd20 is same with the state(6) to be set 00:27:34.353 [2024-07-24 19:04:19.338523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2dd20 (9): Bad file descriptor 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.353 19:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.353 [2024-07-24 19:04:19.348567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.730 [2024-07-24 19:04:20.393737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:35.730 [2024-07-24 19:04:20.393835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2dd20 with addr=10.0.0.2, port=4420 00:27:35.730 [2024-07-24 19:04:20.393870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2dd20 is same with the state(6) to be set 00:27:35.730 [2024-07-24 19:04:20.393931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2dd20 (9): Bad file descriptor 00:27:35.730 [2024-07-24 
19:04:20.394079] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:35.730 [2024-07-24 19:04:20.394137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.730 [2024-07-24 19:04:20.394158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:35.730 [2024-07-24 19:04:20.394181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.730 [2024-07-24 19:04:20.394224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.730 [2024-07-24 19:04:20.394246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.730 19:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.730 19:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:35.730 19:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:36.665 [2024-07-24 19:04:21.396751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:36.665 [2024-07-24 19:04:21.396779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:36.665 [2024-07-24 19:04:21.396789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:36.665 [2024-07-24 19:04:21.396800] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:36.665 [2024-07-24 19:04:21.396824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
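The failure injection itself is just the two commands from discovery_remove_ifc.sh lines 75-76 in the trace: the target-side address is deleted and the link is downed inside the namespace. With reconnect-delay 1s and ctrlr-loss-timeout 2s, the errors above (connect() errno 110, controller reinitialization failed, "Resetting controller failed") are the expected reconnect attempts timing out, after which the test waits for the bdev list to drain:

# Condensed from the trace; wait_for_bdev is the polling helper sketched above.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''      # succeeds once nvme0n1 has been torn down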
00:27:36.665 [2024-07-24 19:04:21.396850] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:36.665 [2024-07-24 19:04:21.396876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.665 [2024-07-24 19:04:21.396890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.665 [2024-07-24 19:04:21.396903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.665 [2024-07-24 19:04:21.396914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.665 [2024-07-24 19:04:21.396925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.665 [2024-07-24 19:04:21.396940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.665 [2024-07-24 19:04:21.396951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.665 [2024-07-24 19:04:21.396961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.665 [2024-07-24 19:04:21.396972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.665 [2024-07-24 19:04:21.396982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.665 [2024-07-24 19:04:21.396991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:36.665 [2024-07-24 19:04:21.397757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2d0f0 (9): Bad file descriptor 00:27:36.665 [2024-07-24 19:04:21.398769] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:36.665 [2024-07-24 19:04:21.398783] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:36.665 19:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.043 19:04:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:38.043 19:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.609 [2024-07-24 19:04:23.453311] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:38.609 [2024-07-24 19:04:23.453332] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:38.609 [2024-07-24 19:04:23.453350] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:38.609 [2024-07-24 19:04:23.582802] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.868 [2024-07-24 19:04:23.682677] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:38.868 [2024-07-24 19:04:23.682721] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:38.868 [2024-07-24 19:04:23.682745] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:38.868 [2024-07-24 19:04:23.682762] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:38.868 [2024-07-24 19:04:23.682771] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:38.868 [2024-07-24 19:04:23.690558] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb70990 was disconnected and freed. delete nvme_qpair. 
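Recovery is the mirror image of the removal: the address comes back, the link comes up, the still-running discovery service finds nqn.2016-06.io.spdk:cnode0 again and reattaches it under a fresh controller name (nvme1, hence the freed qpair 0xb70990 above replacing the earlier 0xb67450). Condensed from discovery_remove_ifc.sh lines 82-86 in the trace:

# Condensed from the trace; the reattached namespace enumerates as nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1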
00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2646528 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2646528 ']' 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2646528 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2646528 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2646528' 00:27:38.868 killing process with pid 2646528 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2646528 00:27:38.868 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2646528 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.127 19:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.127 rmmod nvme_tcp 00:27:39.127 rmmod nvme_fabrics 00:27:39.127 rmmod nvme_keyring 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2646258 ']' 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2646258 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2646258 ']' 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2646258 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@953 -- # uname 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2646258 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2646258' 00:27:39.127 killing process with pid 2646258 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2646258 00:27:39.127 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2646258 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.386 19:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:41.925 00:27:41.925 real 0m21.273s 00:27:41.925 user 0m25.840s 00:27:41.925 sys 0m5.851s 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.925 ************************************ 00:27:41.925 END TEST nvmf_discovery_remove_ifc 00:27:41.925 ************************************ 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.925 ************************************ 00:27:41.925 START TEST nvmf_identify_kernel_target 00:27:41.925 ************************************ 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:41.925 * Looking for test storage... 
00:27:41.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.925 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.926 19:04:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.240 19:04:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:47.240 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:47.240 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
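[Editor's note] The device scan traced here classifies NICs purely by PCI vendor:device ID: 8086:1592 and 8086:159b land in the e810 array, 8086:37d2 in x722, and the 15b3:* IDs in mlx; e810 then wins because the job runs with SPDK_TEST_NVMF_NICS=e810. A rough standalone equivalent of that lookup, assuming lspci is available (the real script reads a prebuilt pci_bus_cache instead):

# Collect E810 ports (device ID 0x159b) by PCI ID, as the trace does via pci_bus_cache.
mapfile -t e810 < <(lspci -Dnmm -d 8086:159b | awk '{print $1}')
for pci in "${e810[@]}"; do
    driver=$(basename "$(readlink "/sys/bus/pci/devices/$pci/driver")")
    echo "Found $pci (0x8086 - 0x159b), bound to $driver"   # prints 'ice' on this rig
done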
00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:47.240 Found net devices under 0000:af:00.0: cvl_0_0 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.240 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:47.241 Found net devices under 0000:af:00.1: cvl_0_1 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.241 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:27:47.499 00:27:47.499 --- 10.0.0.2 ping statistics --- 00:27:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.499 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:27:47.499 00:27:47.499 --- 10.0.0.1 ping statistics --- 00:27:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.499 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.499 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:47.500 19:04:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:50.034 Waiting for block devices as requested 00:27:50.294 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:50.294 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:50.553 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:50.553 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:50.553 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:50.553 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:50.811 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:50.811 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:50.811 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:51.069 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:51.069 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:51.069 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:51.328 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:51.328 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:51.328 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:51.328 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:51.586 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:51.586 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:51.586 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:51.586 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:51.586 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:51.587 No valid GPT data, bailing 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@391 -- # pt= 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.587 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:51.846 00:27:51.846 Discovery Log Number of Records 2, Generation counter 2 00:27:51.846 =====Discovery Log Entry 0====== 00:27:51.846 trtype: tcp 00:27:51.846 adrfam: ipv4 00:27:51.846 subtype: current discovery subsystem 00:27:51.846 treq: not specified, sq flow control disable supported 00:27:51.846 portid: 1 00:27:51.846 trsvcid: 4420 00:27:51.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:51.846 traddr: 10.0.0.1 00:27:51.846 eflags: none 00:27:51.846 sectype: none 00:27:51.846 =====Discovery Log Entry 1====== 00:27:51.846 trtype: tcp 00:27:51.846 adrfam: ipv4 00:27:51.846 subtype: nvme subsystem 00:27:51.846 treq: not specified, sq flow control disable supported 00:27:51.846 portid: 1 00:27:51.846 trsvcid: 4420 00:27:51.846 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:51.846 traddr: 10.0.0.1 00:27:51.846 eflags: none 00:27:51.846 sectype: none 00:27:51.846 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:51.846 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:51.846 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.846 ===================================================== 00:27:51.846 NVMe over Fabrics controller at 
10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:51.846 ===================================================== 00:27:51.846 Controller Capabilities/Features 00:27:51.846 ================================ 00:27:51.846 Vendor ID: 0000 00:27:51.846 Subsystem Vendor ID: 0000 00:27:51.846 Serial Number: 776a910d723a23150b12 00:27:51.846 Model Number: Linux 00:27:51.846 Firmware Version: 6.7.0-68 00:27:51.846 Recommended Arb Burst: 0 00:27:51.846 IEEE OUI Identifier: 00 00 00 00:27:51.846 Multi-path I/O 00:27:51.846 May have multiple subsystem ports: No 00:27:51.846 May have multiple controllers: No 00:27:51.846 Associated with SR-IOV VF: No 00:27:51.846 Max Data Transfer Size: Unlimited 00:27:51.846 Max Number of Namespaces: 0 00:27:51.846 Max Number of I/O Queues: 1024 00:27:51.846 NVMe Specification Version (VS): 1.3 00:27:51.846 NVMe Specification Version (Identify): 1.3 00:27:51.846 Maximum Queue Entries: 1024 00:27:51.846 Contiguous Queues Required: No 00:27:51.846 Arbitration Mechanisms Supported 00:27:51.846 Weighted Round Robin: Not Supported 00:27:51.846 Vendor Specific: Not Supported 00:27:51.846 Reset Timeout: 7500 ms 00:27:51.846 Doorbell Stride: 4 bytes 00:27:51.846 NVM Subsystem Reset: Not Supported 00:27:51.846 Command Sets Supported 00:27:51.846 NVM Command Set: Supported 00:27:51.846 Boot Partition: Not Supported 00:27:51.846 Memory Page Size Minimum: 4096 bytes 00:27:51.846 Memory Page Size Maximum: 4096 bytes 00:27:51.846 Persistent Memory Region: Not Supported 00:27:51.846 Optional Asynchronous Events Supported 00:27:51.846 Namespace Attribute Notices: Not Supported 00:27:51.846 Firmware Activation Notices: Not Supported 00:27:51.846 ANA Change Notices: Not Supported 00:27:51.846 PLE Aggregate Log Change Notices: Not Supported 00:27:51.846 LBA Status Info Alert Notices: Not Supported 00:27:51.847 EGE Aggregate Log Change Notices: Not Supported 00:27:51.847 Normal NVM Subsystem Shutdown event: Not Supported 00:27:51.847 Zone Descriptor Change Notices: Not Supported 00:27:51.847 Discovery Log Change Notices: Supported 00:27:51.847 Controller Attributes 00:27:51.847 128-bit Host Identifier: Not Supported 00:27:51.847 Non-Operational Permissive Mode: Not Supported 00:27:51.847 NVM Sets: Not Supported 00:27:51.847 Read Recovery Levels: Not Supported 00:27:51.847 Endurance Groups: Not Supported 00:27:51.847 Predictable Latency Mode: Not Supported 00:27:51.847 Traffic Based Keep ALive: Not Supported 00:27:51.847 Namespace Granularity: Not Supported 00:27:51.847 SQ Associations: Not Supported 00:27:51.847 UUID List: Not Supported 00:27:51.847 Multi-Domain Subsystem: Not Supported 00:27:51.847 Fixed Capacity Management: Not Supported 00:27:51.847 Variable Capacity Management: Not Supported 00:27:51.847 Delete Endurance Group: Not Supported 00:27:51.847 Delete NVM Set: Not Supported 00:27:51.847 Extended LBA Formats Supported: Not Supported 00:27:51.847 Flexible Data Placement Supported: Not Supported 00:27:51.847 00:27:51.847 Controller Memory Buffer Support 00:27:51.847 ================================ 00:27:51.847 Supported: No 00:27:51.847 00:27:51.847 Persistent Memory Region Support 00:27:51.847 ================================ 00:27:51.847 Supported: No 00:27:51.847 00:27:51.847 Admin Command Set Attributes 00:27:51.847 ============================ 00:27:51.847 Security Send/Receive: Not Supported 00:27:51.847 Format NVM: Not Supported 00:27:51.847 Firmware Activate/Download: Not Supported 00:27:51.847 Namespace Management: Not Supported 00:27:51.847 Device Self-Test: 
Not Supported 00:27:51.847 Directives: Not Supported 00:27:51.847 NVMe-MI: Not Supported 00:27:51.847 Virtualization Management: Not Supported 00:27:51.847 Doorbell Buffer Config: Not Supported 00:27:51.847 Get LBA Status Capability: Not Supported 00:27:51.847 Command & Feature Lockdown Capability: Not Supported 00:27:51.847 Abort Command Limit: 1 00:27:51.847 Async Event Request Limit: 1 00:27:51.847 Number of Firmware Slots: N/A 00:27:51.847 Firmware Slot 1 Read-Only: N/A 00:27:51.847 Firmware Activation Without Reset: N/A 00:27:51.847 Multiple Update Detection Support: N/A 00:27:51.847 Firmware Update Granularity: No Information Provided 00:27:51.847 Per-Namespace SMART Log: No 00:27:51.847 Asymmetric Namespace Access Log Page: Not Supported 00:27:51.847 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:51.847 Command Effects Log Page: Not Supported 00:27:51.847 Get Log Page Extended Data: Supported 00:27:51.847 Telemetry Log Pages: Not Supported 00:27:51.847 Persistent Event Log Pages: Not Supported 00:27:51.847 Supported Log Pages Log Page: May Support 00:27:51.847 Commands Supported & Effects Log Page: Not Supported 00:27:51.847 Feature Identifiers & Effects Log Page:May Support 00:27:51.847 NVMe-MI Commands & Effects Log Page: May Support 00:27:51.847 Data Area 4 for Telemetry Log: Not Supported 00:27:51.847 Error Log Page Entries Supported: 1 00:27:51.847 Keep Alive: Not Supported 00:27:51.847 00:27:51.847 NVM Command Set Attributes 00:27:51.847 ========================== 00:27:51.847 Submission Queue Entry Size 00:27:51.847 Max: 1 00:27:51.847 Min: 1 00:27:51.847 Completion Queue Entry Size 00:27:51.847 Max: 1 00:27:51.847 Min: 1 00:27:51.847 Number of Namespaces: 0 00:27:51.847 Compare Command: Not Supported 00:27:51.847 Write Uncorrectable Command: Not Supported 00:27:51.847 Dataset Management Command: Not Supported 00:27:51.847 Write Zeroes Command: Not Supported 00:27:51.847 Set Features Save Field: Not Supported 00:27:51.847 Reservations: Not Supported 00:27:51.847 Timestamp: Not Supported 00:27:51.847 Copy: Not Supported 00:27:51.847 Volatile Write Cache: Not Present 00:27:51.847 Atomic Write Unit (Normal): 1 00:27:51.847 Atomic Write Unit (PFail): 1 00:27:51.847 Atomic Compare & Write Unit: 1 00:27:51.847 Fused Compare & Write: Not Supported 00:27:51.847 Scatter-Gather List 00:27:51.847 SGL Command Set: Supported 00:27:51.847 SGL Keyed: Not Supported 00:27:51.847 SGL Bit Bucket Descriptor: Not Supported 00:27:51.847 SGL Metadata Pointer: Not Supported 00:27:51.847 Oversized SGL: Not Supported 00:27:51.847 SGL Metadata Address: Not Supported 00:27:51.847 SGL Offset: Supported 00:27:51.847 Transport SGL Data Block: Not Supported 00:27:51.847 Replay Protected Memory Block: Not Supported 00:27:51.847 00:27:51.847 Firmware Slot Information 00:27:51.847 ========================= 00:27:51.847 Active slot: 0 00:27:51.847 00:27:51.847 00:27:51.847 Error Log 00:27:51.847 ========= 00:27:51.847 00:27:51.847 Active Namespaces 00:27:51.847 ================= 00:27:51.847 Discovery Log Page 00:27:51.847 ================== 00:27:51.847 Generation Counter: 2 00:27:51.847 Number of Records: 2 00:27:51.847 Record Format: 0 00:27:51.847 00:27:51.847 Discovery Log Entry 0 00:27:51.847 ---------------------- 00:27:51.847 Transport Type: 3 (TCP) 00:27:51.847 Address Family: 1 (IPv4) 00:27:51.847 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:51.847 Entry Flags: 00:27:51.847 Duplicate Returned Information: 0 00:27:51.847 Explicit Persistent Connection Support for Discovery: 0 
00:27:51.847 Transport Requirements: 00:27:51.847 Secure Channel: Not Specified 00:27:51.847 Port ID: 1 (0x0001) 00:27:51.847 Controller ID: 65535 (0xffff) 00:27:51.847 Admin Max SQ Size: 32 00:27:51.847 Transport Service Identifier: 4420 00:27:51.847 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:51.847 Transport Address: 10.0.0.1 00:27:51.847 Discovery Log Entry 1 00:27:51.847 ---------------------- 00:27:51.847 Transport Type: 3 (TCP) 00:27:51.847 Address Family: 1 (IPv4) 00:27:51.847 Subsystem Type: 2 (NVM Subsystem) 00:27:51.847 Entry Flags: 00:27:51.847 Duplicate Returned Information: 0 00:27:51.847 Explicit Persistent Connection Support for Discovery: 0 00:27:51.847 Transport Requirements: 00:27:51.847 Secure Channel: Not Specified 00:27:51.847 Port ID: 1 (0x0001) 00:27:51.847 Controller ID: 65535 (0xffff) 00:27:51.847 Admin Max SQ Size: 32 00:27:51.847 Transport Service Identifier: 4420 00:27:51.847 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:51.847 Transport Address: 10.0.0.1 00:27:51.847 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:51.847 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.107 get_feature(0x01) failed 00:27:52.107 get_feature(0x02) failed 00:27:52.107 get_feature(0x04) failed 00:27:52.107 ===================================================== 00:27:52.107 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:52.107 ===================================================== 00:27:52.107 Controller Capabilities/Features 00:27:52.107 ================================ 00:27:52.107 Vendor ID: 0000 00:27:52.107 Subsystem Vendor ID: 0000 00:27:52.107 Serial Number: 556f10d98f2b5226e7f3 00:27:52.107 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:52.107 Firmware Version: 6.7.0-68 00:27:52.107 Recommended Arb Burst: 6 00:27:52.107 IEEE OUI Identifier: 00 00 00 00:27:52.107 Multi-path I/O 00:27:52.107 May have multiple subsystem ports: Yes 00:27:52.107 May have multiple controllers: Yes 00:27:52.107 Associated with SR-IOV VF: No 00:27:52.107 Max Data Transfer Size: Unlimited 00:27:52.107 Max Number of Namespaces: 1024 00:27:52.107 Max Number of I/O Queues: 128 00:27:52.107 NVMe Specification Version (VS): 1.3 00:27:52.107 NVMe Specification Version (Identify): 1.3 00:27:52.107 Maximum Queue Entries: 1024 00:27:52.107 Contiguous Queues Required: No 00:27:52.107 Arbitration Mechanisms Supported 00:27:52.107 Weighted Round Robin: Not Supported 00:27:52.107 Vendor Specific: Not Supported 00:27:52.107 Reset Timeout: 7500 ms 00:27:52.107 Doorbell Stride: 4 bytes 00:27:52.107 NVM Subsystem Reset: Not Supported 00:27:52.107 Command Sets Supported 00:27:52.107 NVM Command Set: Supported 00:27:52.107 Boot Partition: Not Supported 00:27:52.107 Memory Page Size Minimum: 4096 bytes 00:27:52.107 Memory Page Size Maximum: 4096 bytes 00:27:52.107 Persistent Memory Region: Not Supported 00:27:52.107 Optional Asynchronous Events Supported 00:27:52.107 Namespace Attribute Notices: Supported 00:27:52.107 Firmware Activation Notices: Not Supported 00:27:52.107 ANA Change Notices: Supported 00:27:52.107 PLE Aggregate Log Change Notices: Not Supported 00:27:52.107 LBA Status Info Alert Notices: Not Supported 00:27:52.107 EGE Aggregate Log Change Notices: Not Supported 00:27:52.107 Normal NVM 
Subsystem Shutdown event: Not Supported 00:27:52.107 Zone Descriptor Change Notices: Not Supported 00:27:52.107 Discovery Log Change Notices: Not Supported 00:27:52.107 Controller Attributes 00:27:52.107 128-bit Host Identifier: Supported 00:27:52.107 Non-Operational Permissive Mode: Not Supported 00:27:52.107 NVM Sets: Not Supported 00:27:52.107 Read Recovery Levels: Not Supported 00:27:52.107 Endurance Groups: Not Supported 00:27:52.107 Predictable Latency Mode: Not Supported 00:27:52.107 Traffic Based Keep ALive: Supported 00:27:52.107 Namespace Granularity: Not Supported 00:27:52.107 SQ Associations: Not Supported 00:27:52.107 UUID List: Not Supported 00:27:52.107 Multi-Domain Subsystem: Not Supported 00:27:52.107 Fixed Capacity Management: Not Supported 00:27:52.107 Variable Capacity Management: Not Supported 00:27:52.107 Delete Endurance Group: Not Supported 00:27:52.107 Delete NVM Set: Not Supported 00:27:52.107 Extended LBA Formats Supported: Not Supported 00:27:52.107 Flexible Data Placement Supported: Not Supported 00:27:52.107 00:27:52.107 Controller Memory Buffer Support 00:27:52.107 ================================ 00:27:52.107 Supported: No 00:27:52.107 00:27:52.107 Persistent Memory Region Support 00:27:52.107 ================================ 00:27:52.107 Supported: No 00:27:52.107 00:27:52.107 Admin Command Set Attributes 00:27:52.107 ============================ 00:27:52.107 Security Send/Receive: Not Supported 00:27:52.107 Format NVM: Not Supported 00:27:52.107 Firmware Activate/Download: Not Supported 00:27:52.107 Namespace Management: Not Supported 00:27:52.107 Device Self-Test: Not Supported 00:27:52.107 Directives: Not Supported 00:27:52.107 NVMe-MI: Not Supported 00:27:52.107 Virtualization Management: Not Supported 00:27:52.107 Doorbell Buffer Config: Not Supported 00:27:52.107 Get LBA Status Capability: Not Supported 00:27:52.107 Command & Feature Lockdown Capability: Not Supported 00:27:52.107 Abort Command Limit: 4 00:27:52.107 Async Event Request Limit: 4 00:27:52.107 Number of Firmware Slots: N/A 00:27:52.107 Firmware Slot 1 Read-Only: N/A 00:27:52.107 Firmware Activation Without Reset: N/A 00:27:52.107 Multiple Update Detection Support: N/A 00:27:52.107 Firmware Update Granularity: No Information Provided 00:27:52.107 Per-Namespace SMART Log: Yes 00:27:52.107 Asymmetric Namespace Access Log Page: Supported 00:27:52.107 ANA Transition Time : 10 sec 00:27:52.107 00:27:52.107 Asymmetric Namespace Access Capabilities 00:27:52.107 ANA Optimized State : Supported 00:27:52.107 ANA Non-Optimized State : Supported 00:27:52.107 ANA Inaccessible State : Supported 00:27:52.107 ANA Persistent Loss State : Supported 00:27:52.107 ANA Change State : Supported 00:27:52.107 ANAGRPID is not changed : No 00:27:52.107 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:52.107 00:27:52.107 ANA Group Identifier Maximum : 128 00:27:52.107 Number of ANA Group Identifiers : 128 00:27:52.107 Max Number of Allowed Namespaces : 1024 00:27:52.107 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:52.107 Command Effects Log Page: Supported 00:27:52.107 Get Log Page Extended Data: Supported 00:27:52.107 Telemetry Log Pages: Not Supported 00:27:52.107 Persistent Event Log Pages: Not Supported 00:27:52.107 Supported Log Pages Log Page: May Support 00:27:52.107 Commands Supported & Effects Log Page: Not Supported 00:27:52.107 Feature Identifiers & Effects Log Page:May Support 00:27:52.107 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.107 Data Area 4 for Telemetry Log: Not 
Supported 00:27:52.107 Error Log Page Entries Supported: 128 00:27:52.107 Keep Alive: Supported 00:27:52.107 Keep Alive Granularity: 1000 ms 00:27:52.107 00:27:52.107 NVM Command Set Attributes 00:27:52.108 ========================== 00:27:52.108 Submission Queue Entry Size 00:27:52.108 Max: 64 00:27:52.108 Min: 64 00:27:52.108 Completion Queue Entry Size 00:27:52.108 Max: 16 00:27:52.108 Min: 16 00:27:52.108 Number of Namespaces: 1024 00:27:52.108 Compare Command: Not Supported 00:27:52.108 Write Uncorrectable Command: Not Supported 00:27:52.108 Dataset Management Command: Supported 00:27:52.108 Write Zeroes Command: Supported 00:27:52.108 Set Features Save Field: Not Supported 00:27:52.108 Reservations: Not Supported 00:27:52.108 Timestamp: Not Supported 00:27:52.108 Copy: Not Supported 00:27:52.108 Volatile Write Cache: Present 00:27:52.108 Atomic Write Unit (Normal): 1 00:27:52.108 Atomic Write Unit (PFail): 1 00:27:52.108 Atomic Compare & Write Unit: 1 00:27:52.108 Fused Compare & Write: Not Supported 00:27:52.108 Scatter-Gather List 00:27:52.108 SGL Command Set: Supported 00:27:52.108 SGL Keyed: Not Supported 00:27:52.108 SGL Bit Bucket Descriptor: Not Supported 00:27:52.108 SGL Metadata Pointer: Not Supported 00:27:52.108 Oversized SGL: Not Supported 00:27:52.108 SGL Metadata Address: Not Supported 00:27:52.108 SGL Offset: Supported 00:27:52.108 Transport SGL Data Block: Not Supported 00:27:52.108 Replay Protected Memory Block: Not Supported 00:27:52.108 00:27:52.108 Firmware Slot Information 00:27:52.108 ========================= 00:27:52.108 Active slot: 0 00:27:52.108 00:27:52.108 Asymmetric Namespace Access 00:27:52.108 =========================== 00:27:52.108 Change Count : 0 00:27:52.108 Number of ANA Group Descriptors : 1 00:27:52.108 ANA Group Descriptor : 0 00:27:52.108 ANA Group ID : 1 00:27:52.108 Number of NSID Values : 1 00:27:52.108 Change Count : 0 00:27:52.108 ANA State : 1 00:27:52.108 Namespace Identifier : 1 00:27:52.108 00:27:52.108 Commands Supported and Effects 00:27:52.108 ============================== 00:27:52.108 Admin Commands 00:27:52.108 -------------- 00:27:52.108 Get Log Page (02h): Supported 00:27:52.108 Identify (06h): Supported 00:27:52.108 Abort (08h): Supported 00:27:52.108 Set Features (09h): Supported 00:27:52.108 Get Features (0Ah): Supported 00:27:52.108 Asynchronous Event Request (0Ch): Supported 00:27:52.108 Keep Alive (18h): Supported 00:27:52.108 I/O Commands 00:27:52.108 ------------ 00:27:52.108 Flush (00h): Supported 00:27:52.108 Write (01h): Supported LBA-Change 00:27:52.108 Read (02h): Supported 00:27:52.108 Write Zeroes (08h): Supported LBA-Change 00:27:52.108 Dataset Management (09h): Supported 00:27:52.108 00:27:52.108 Error Log 00:27:52.108 ========= 00:27:52.108 Entry: 0 00:27:52.108 Error Count: 0x3 00:27:52.108 Submission Queue Id: 0x0 00:27:52.108 Command Id: 0x5 00:27:52.108 Phase Bit: 0 00:27:52.108 Status Code: 0x2 00:27:52.108 Status Code Type: 0x0 00:27:52.108 Do Not Retry: 1 00:27:52.108 Error Location: 0x28 00:27:52.108 LBA: 0x0 00:27:52.108 Namespace: 0x0 00:27:52.108 Vendor Log Page: 0x0 00:27:52.108 ----------- 00:27:52.108 Entry: 1 00:27:52.108 Error Count: 0x2 00:27:52.108 Submission Queue Id: 0x0 00:27:52.108 Command Id: 0x5 00:27:52.108 Phase Bit: 0 00:27:52.108 Status Code: 0x2 00:27:52.108 Status Code Type: 0x0 00:27:52.108 Do Not Retry: 1 00:27:52.108 Error Location: 0x28 00:27:52.108 LBA: 0x0 00:27:52.108 Namespace: 0x0 00:27:52.108 Vendor Log Page: 0x0 00:27:52.108 ----------- 00:27:52.108 Entry: 2 
00:27:52.108 Error Count: 0x1 00:27:52.108 Submission Queue Id: 0x0 00:27:52.108 Command Id: 0x4 00:27:52.108 Phase Bit: 0 00:27:52.108 Status Code: 0x2 00:27:52.108 Status Code Type: 0x0 00:27:52.108 Do Not Retry: 1 00:27:52.108 Error Location: 0x28 00:27:52.108 LBA: 0x0 00:27:52.108 Namespace: 0x0 00:27:52.108 Vendor Log Page: 0x0 00:27:52.108 00:27:52.108 Number of Queues 00:27:52.108 ================ 00:27:52.108 Number of I/O Submission Queues: 128 00:27:52.108 Number of I/O Completion Queues: 128 00:27:52.108 00:27:52.108 ZNS Specific Controller Data 00:27:52.108 ============================ 00:27:52.108 Zone Append Size Limit: 0 00:27:52.108 00:27:52.108 00:27:52.108 Active Namespaces 00:27:52.108 ================= 00:27:52.108 get_feature(0x05) failed 00:27:52.108 Namespace ID:1 00:27:52.108 Command Set Identifier: NVM (00h) 00:27:52.108 Deallocate: Supported 00:27:52.108 Deallocated/Unwritten Error: Not Supported 00:27:52.108 Deallocated Read Value: Unknown 00:27:52.108 Deallocate in Write Zeroes: Not Supported 00:27:52.108 Deallocated Guard Field: 0xFFFF 00:27:52.108 Flush: Supported 00:27:52.108 Reservation: Not Supported 00:27:52.108 Namespace Sharing Capabilities: Multiple Controllers 00:27:52.108 Size (in LBAs): 1953525168 (931GiB) 00:27:52.108 Capacity (in LBAs): 1953525168 (931GiB) 00:27:52.108 Utilization (in LBAs): 1953525168 (931GiB) 00:27:52.108 UUID: e2bd3717-a5ea-4678-a005-54fc8d0457fb 00:27:52.108 Thin Provisioning: Not Supported 00:27:52.108 Per-NS Atomic Units: Yes 00:27:52.108 Atomic Boundary Size (Normal): 0 00:27:52.108 Atomic Boundary Size (PFail): 0 00:27:52.108 Atomic Boundary Offset: 0 00:27:52.108 NGUID/EUI64 Never Reused: No 00:27:52.108 ANA group ID: 1 00:27:52.108 Namespace Write Protected: No 00:27:52.108 Number of LBA Formats: 1 00:27:52.108 Current LBA Format: LBA Format #00 00:27:52.108 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:52.108 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.108 rmmod nvme_tcp 00:27:52.108 rmmod nvme_fabrics 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:52.108 
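[Editor's note] Everything this test did to the kernel target goes through nvmet configfs, and the clean_kernel_target teardown that follows has to undo it in reverse: disable the namespace, unlink the port from the subsystem, remove the directories, then unload the modules. A condensed replay of the create/teardown sequence traced in this test — paths and NQN are taken from the trace, the attribute file names are the standard nvmet ones (the xtrace output elides the redirection targets), and /dev/nvme0n1 will differ on other machines:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
# create (configure_kernel_target)
modprobe nvmet                       # nvmet_tcp is pulled in when the port is enabled
mkdir -p "$sub/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # target file inferred, not shown in trace
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"     # port starts listening on 10.0.0.1:4420 here
# teardown (clean_kernel_target), reverse order
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet

Once the ln -s lands, the listener is live, which is exactly why the nvme discover above reported two records: the discovery subsystem plus nqn.2016-06.io.spdk:testnqn.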
19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.108 19:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.012 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:54.271 19:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.559 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:57.559 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:58.127 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:58.127 00:27:58.127 real 0m16.547s 00:27:58.127 user 0m4.040s 00:27:58.127 sys 0m8.694s 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@10 -- # set +x 00:27:58.127 ************************************ 00:27:58.127 END TEST nvmf_identify_kernel_target 00:27:58.127 ************************************ 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.127 ************************************ 00:27:58.127 START TEST nvmf_auth_host 00:27:58.127 ************************************ 00:27:58.127 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.386 * Looking for test storage... 00:27:58.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.386 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.387 19:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.957 19:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:04.957 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- 
# for pci in "${pci_devs[@]}" 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:04.957 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.957 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:04.958 Found net devices under 0000:af:00.0: cvl_0_0 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:04.958 Found net devices under 0000:af:00.1: cvl_0_1 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.958 19:04:48 
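The two "Found net devices under ..." lines above come from globbing each NIC's sysfs node: for every e810 PCI function, nvmf/common.sh expands /sys/bus/pci/devices/$pci/net/* and keeps only the interface basenames. A minimal standalone sketch of that lookup, reusing the PCI addresses from this run (it assumes the same sysfs layout as the logged host):

# Resolve kernel net interfaces from NIC PCI addresses, as common.sh does above.
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done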
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.958 19:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:28:04.958 00:28:04.958 --- 10.0.0.2 ping statistics --- 00:28:04.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.958 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:04.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:28:04.958 00:28:04.958 --- 10.0.0.1 ping statistics --- 00:28:04.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.958 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2658620 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2658620 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2658620 ']' 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
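Condensed from the nvmf_tcp_init entries above: the test gets a real TCP path on a single host by moving one e810 port (cvl_0_0, the target side, 10.0.0.2) into a private network namespace while the peer port (cvl_0_1, the initiator side, 10.0.0.1) stays in the root namespace; the two pings confirm the loop works in both directions. A sketch of the same sequence with the names used in this log:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity checks

The nvmf_tgt started right afterwards ("ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth") therefore runs on the target side of this link.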
00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3960c936a528623eb7b1d444ee32db1d 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QR3 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3960c936a528623eb7b1d444ee32db1d 0 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3960c936a528623eb7b1d444ee32db1d 0 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3960c936a528623eb7b1d444ee32db1d 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QR3 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QR3 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QR3 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.958 19:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:04.958 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=75905d129bf323b222fa2fd37e13b39e026ea3f5c7b629dbf654d9b4f7c2a6fa 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6PT 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 75905d129bf323b222fa2fd37e13b39e026ea3f5c7b629dbf654d9b4f7c2a6fa 3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 75905d129bf323b222fa2fd37e13b39e026ea3f5c7b629dbf654d9b4f7c2a6fa 3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=75905d129bf323b222fa2fd37e13b39e026ea3f5c7b629dbf654d9b4f7c2a6fa 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6PT 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6PT 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6PT 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9a4a0efbf36de5379420afcbd818f5a36ea70b7357c50cc 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gid 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9a4a0efbf36de5379420afcbd818f5a36ea70b7357c50cc 0 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9a4a0efbf36de5379420afcbd818f5a36ea70b7357c50cc 0 
00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9a4a0efbf36de5379420afcbd818f5a36ea70b7357c50cc 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gid 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gid 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Gid 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57bd54f276b36cd2c46ad928856390e04099d8efda8bf875 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LV3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57bd54f276b36cd2c46ad928856390e04099d8efda8bf875 2 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57bd54f276b36cd2c46ad928856390e04099d8efda8bf875 2 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57bd54f276b36cd2c46ad928856390e04099d8efda8bf875 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LV3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LV3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LV3 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.959 19:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=61e7d7c197d3a4ef070afa903e7a3db8 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CXh 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 61e7d7c197d3a4ef070afa903e7a3db8 1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 61e7d7c197d3a4ef070afa903e7a3db8 1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=61e7d7c197d3a4ef070afa903e7a3db8 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CXh 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CXh 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.CXh 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=944988ab7c6a696c46804eec39ce776b 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Ld 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 944988ab7c6a696c46804eec39ce776b 1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 944988ab7c6a696c46804eec39ce776b 1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=944988ab7c6a696c46804eec39ce776b 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Ld 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Ld 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6Ld 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=11c6c80b3e2a4ae76acbdc7a4dbb96268c39129603c8850e 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9XC 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 11c6c80b3e2a4ae76acbdc7a4dbb96268c39129603c8850e 2 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 11c6c80b3e2a4ae76acbdc7a4dbb96268c39129603c8850e 2 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=11c6c80b3e2a4ae76acbdc7a4dbb96268c39129603c8850e 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:04.959 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9XC 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9XC 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9XC 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:05.219 19:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:05.219 19:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=34dbd892ff5079314f2fa1a8f3b2ffa9 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LPh 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 34dbd892ff5079314f2fa1a8f3b2ffa9 0 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 34dbd892ff5079314f2fa1a8f3b2ffa9 0 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=34dbd892ff5079314f2fa1a8f3b2ffa9 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LPh 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LPh 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.LPh 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ac3cb6028bd8d2679b8a5c4ca573090092bb40422c30d0429a1db83536966e7 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.joQ 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ac3cb6028bd8d2679b8a5c4ca573090092bb40422c30d0429a1db83536966e7 3 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ac3cb6028bd8d2679b8a5c4ca573090092bb40422c30d0429a1db83536966e7 3 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ac3cb6028bd8d2679b8a5c4ca573090092bb40422c30d0429a1db83536966e7 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.joQ 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.joQ 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.joQ 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2658620 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2658620 ']' 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.219 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QR3 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6PT ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6PT 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Gid 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LV3 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LV3 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CXh 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6Ld ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Ld 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9XC 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.478 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.LPh ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.LPh 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.joQ 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.479 19:04:50 
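Each gen_dhchap_key entry above draws random bytes with xxd from /dev/urandom and hands the hex string to an inline "python -" (nvmf/common.sh@705) that emits the DHHC-1 secret later registered with keyring_file_add_key. The xtrace output hides the heredoc body, so the following is a hedged reconstruction, assuming the standard NVMe DH-HMAC-CHAP secret representation (base64 over the secret bytes followed by their little-endian CRC-32), which the key/ckey values visible in this log are consistent with:

# Sketch of format_key: DHHC-1:<digest id>:<base64(secret || crc32(secret))>:
# Digest id 0 = none, 1 = sha256, 2 = sha384, 3 = sha512 (inferred from the
# gen_dhchap_key arguments above); the hex string itself is the secret payload.
format_dhchap_key_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

format_dhchap_key_sketch a9a4a0efbf36de5379420afcbd818f5a36ea70b7357c50cc 0
# If the CRC reconstruction is right, this matches keys[1] above:
# DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: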
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:05.479 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:05.737 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:05.737 19:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:08.271 Waiting for block devices as requested 00:28:08.271 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:28:08.530 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:08.530 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:08.530 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:08.789 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:08.789 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:08.789 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:08.789 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:09.048 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:09.048 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:09.048 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:09.306 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:09.306 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:09.306 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:09.306 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:09.565 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:09.565 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:10.132 No valid GPT data, bailing 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:10.132 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:10.392 19:04:55 
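Before the kernel target is assembled, the entries above pick a backing namespace: scan /sys/block/nvme*, skip zoned devices, and require that the device carries no partition table (the "No valid GPT data, bailing" message from spdk-gpt.py is the desired outcome: the disk is free to use). A rough equivalent of that selection, with blkid standing in for SPDK's scripts/spdk-gpt.py helper:

# Pick the first non-zoned, unpartitioned NVMe namespace for nvmet to export.
nvme=
for block in /sys/block/nvme*; do
    dev=${block##*/}
    if [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]]; then
        continue                                 # zoned namespaces are skipped
    fi
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        continue                                 # already partitioned, in use
    fi
    nvme=/dev/$dev && break
done
echo "Using backing device: ${nvme:?no free NVMe namespace found}"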
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:10.392 00:28:10.392 Discovery Log Number of Records 2, Generation counter 2 00:28:10.392 =====Discovery Log Entry 0====== 00:28:10.392 trtype: tcp 00:28:10.392 adrfam: ipv4 00:28:10.392 subtype: current discovery subsystem 00:28:10.392 treq: not specified, sq flow control disable supported 00:28:10.392 portid: 1 00:28:10.392 trsvcid: 4420 00:28:10.392 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:10.392 traddr: 10.0.0.1 00:28:10.392 eflags: none 00:28:10.392 sectype: none 00:28:10.392 =====Discovery Log Entry 1====== 00:28:10.392 trtype: tcp 00:28:10.392 adrfam: ipv4 00:28:10.392 subtype: nvme subsystem 00:28:10.392 treq: not specified, sq flow control disable supported 00:28:10.392 portid: 1 00:28:10.392 trsvcid: 4420 00:28:10.392 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:10.392 traddr: 10.0.0.1 00:28:10.392 eflags: none 00:28:10.392 sectype: none 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
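configure_kernel_target and nvmet_auth_init above drive the Linux kernel target purely through configfs: mkdir the subsystem/namespace/port nodes, echo attribute values, and ln -s to publish. The xtrace does not show which file each echo lands in, so the attribute names below are the standard nvmet configfs ones and are an assumption about the redirection targets:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$ns" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"        # re-tightened by nvmet_auth_init
echo /dev/nvme0n1 > "$ns/device_path"
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"           # port now serves the subsystem

# nvmet_auth_init: register the host NQN and allow only that host
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"

After this, the "nvme discover ... -a 10.0.0.1 -t tcp -s 4420" above reports exactly two records: the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.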
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:10.392 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.393 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 nvme0n1 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
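The trace here is entering the per-combination passes: nvmet_auth_set_key has just programmed key 0 on the kernel target, and connect_authenticate sha256 ffdhe2048 0 is about to run. As a reading aid, here is a condensed sketch of what the connect_authenticate entries (host/auth.sh@55-65 in this trace) amount to; it is reconstructed from the traced commands only, so the real helper in test/nvmf/host/auth.sh may differ in detail, and rpc_cmd, get_main_ns_ip, and the keys/ckeys arrays are harness definitions that live elsewhere in the suite:

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest=$1 dhgroup=$2 keyid=$3
        # --dhchap-ctrlr-key is only passed when a controller key exists
        # for this keyid (keyid 4 has none, so no bidirectional auth there)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the digest/dhgroup pair under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # get_main_ns_ip resolves NVMF_INITIATOR_IP (10.0.0.1 here) for tcp,
        # NVMF_FIRST_TARGET_IP for rdma
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The controller only shows up if DH-HMAC-CHAP authentication succeeded
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The earlier call traced at host/auth.sh@93-94 passed the comma-joined full lists (sha256,sha384,sha512 and ffdhe2048 through ffdhe8192) to verify that negotiation works when everything is offered at once; from this point on, each call pins a single digest, DH group, and key combination.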
00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.652 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.653 nvme0n1 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.653 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.912 19:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.912 nvme0n1 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.912 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.171 19:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.171 nvme0n1 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:11.171 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.430 nvme0n1 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:11.430 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.431 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.689 nvme0n1 00:28:11.689 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.689 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.689 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.689 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
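This is the seam between two DH-group passes: the sha256/ffdhe2048 iterations are complete, and the nested loops traced at host/auth.sh@100-103 advance to ffdhe3072, reprogramming the kernel target through nvmet_auth_set_key before every reconnect. Below is a sketch of that driver loop plus the set-key helper, again reconstructed from the trace. Note that bash xtrace does not print redirections, so the destinations of the bare echo entries at host/auth.sh@48-51 are invisible in this log; the configfs paths in the sketch are an assumption modeled on the Linux nvmet host attributes, not something the log confirms:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed destination: the host entry in the nvmet configfs tree
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. 'hmac(sha256)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe3072
        echo "$key" > "$host/dhchap_key"              # the DHHC-1:xx:...: secret
        # An empty controller key disables bidirectional authentication
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The DHHC-1:xx: prefix on each secret encodes its transformation per the NVMe DH-HMAC-CHAP key representation: 00 means the base64 secret is used as-is, while 01, 02, and 03 select a SHA-256, SHA-384, or SHA-512 transformed key, which is why different keyids in this run carry different prefixes.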
00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.690 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.949 nvme0n1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.949 
19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.949 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.950 19:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.209 nvme0n1 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.209 19:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.209 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.468 nvme0n1 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.468 19:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.468 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.727 nvme0n1 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.727 19:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.727 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 nvme0n1 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.986 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.987 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.987 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:12.987 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.987 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.245 19:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.245 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.245 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.245 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.245 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.505 nvme0n1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:13.505 19:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.505 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.764 nvme0n1 00:28:13.764 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
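For readers following the trace: each pass above is one connect_authenticate round, and the host-side cycle it exercises can be condensed into the bash sketch below. This is a reading of the xtrace, not the verbatim host/auth.sh; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py as elsewhere in the harness, and keys/ckeys are the test's arrays of DHHC-1 secrets.

  # Sketch of one connect_authenticate round (digest, dhgroup, keyid),
  # reconstructed from the host/auth.sh@55-65 lines in the trace above.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pass a controller key only if this keyid has one (bidirectional auth)
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Pin the initiator to a single digest/dhgroup so the DH-HMAC-CHAP
      # handshake must negotiate exactly this combination
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"

      # Attach over TCP with the key under test (10.0.0.1:4420 per the trace)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

      # Authentication succeeded only if the controller actually shows up;
      # then detach so the next round starts clean
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }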
00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.765 19:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.024 nvme0n1 00:28:14.024 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.024 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.024 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.024 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.024 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.024 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.284 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.543 nvme0n1 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.543 19:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.543 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.544 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.544 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.544 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.544 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.544 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.803 nvme0n1 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.803 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.062 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.062 19:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.321 nvme0n1 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.321 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 
00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.616 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.875 nvme0n1 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.875 19:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.875 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.134 19:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.393 nvme0n1 00:28:16.393 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.393 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.393 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.393 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.393 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.652 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.653 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.653 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.653 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.220 nvme0n1 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.220 19:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.220 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.787 nvme0n1 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.788 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:18.722 nvme0n1 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.722 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.723 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.723 19:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.289 nvme0n1 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:19.289 
19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.289 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.546 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.546 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.546 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.546 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.547 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.114 nvme0n1 00:28:20.114 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.114 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.114 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.114 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.114 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.114 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.372 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.373 
19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.373 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.939 nvme0n1 00:28:20.939 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.939 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.939 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.939 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.939 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.939 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.198 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.767 nvme0n1 00:28:21.767 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.767 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.767 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.767 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.767 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.767 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.026 nvme0n1 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:22.026 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.026 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.026 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.026 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.026 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.286 nvme0n1 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:22.286 19:05:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.286 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.287 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.546 nvme0n1 00:28:22.546 19:05:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.546 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.547 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.806 nvme0n1 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.806 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.065 nvme0n1 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.065 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.066 19:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.325 nvme0n1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.325 
19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.325 19:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.325 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.584 nvme0n1 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:23.584 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.585 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.844 nvme0n1 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.844 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.104 nvme0n1 00:28:24.104 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.104 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.104 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.104 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.104 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.104 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.104 
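For reference: the nvmet_auth_set_key helper traced above (host/auth.sh@42-51) programs the target side of the DH-HMAC-CHAP handshake before each connect attempt: it picks a digest ('hmac(sha384)' in this pass), a DH group, and one of the five DHHC-1 secrets. In the DHHC-1:<t>:<base64>: representation, the second field selects the secret transformation (00 = used as-is, 01/02/03 = SHA-256/384/512). The xtrace output shows only the echo commands, not their redirection targets, so the sketch below is a hedged reconstruction assuming the target is the Linux kernel nvmet driver configured through configfs; the dhchap_* attribute paths and the keys/ckeys arrays are assumptions, not taken from this log.

    # Hedged sketch of nvmet_auth_set_key; configfs paths are assumed,
    # not shown in the trace. keys/ckeys are filled earlier in auth.sh.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha384)
        echo "$dhgroup" > "$host/dhchap_dhgroup"         # e.g. ffdhe3072
        echo "$key" > "$host/dhchap_key"                 # DHHC-1:xx:...: secret
        # The controller (bidirectional) key is optional; keyid 4 has none,
        # which matches the [[ -z '' ]] check at host/auth.sh@51 above.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }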
19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.104 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.363 nvme0n1 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.363 
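For reference: each connect_authenticate iteration (host/auth.sh@55-65) exercises the SPDK initiator through the rpc_cmd wrapper around scripts/rpc.py, and all of the RPC names and flags below appear verbatim in the trace. A minimal sketch of one iteration, assuming key${keyid}/ckey${keyid} were registered in SPDK's keyring earlier in the run (outside this excerpt) and that the 10.0.0.1:4420 address comes from get_main_ns_ip (reconstructed further below):

    # Minimal sketch of one connect_authenticate pass as traced above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # The controller key is optional (keyid 4 has no ckey), hence the
        # conditional array expansion seen at host/auth.sh@58.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to the single digest/DH group under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Authentication runs during the fabrics CONNECT of the attach.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # A successful handshake leaves exactly one controller named nvme0.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The sha384/ffdhe3072 pass completes here; the outer loops at host/auth.sh@101-102 then repeat the same five-key sequence for ffdhe4096 and ffdhe6144, which is why the records below recur with only the dhgroup changing.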
19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:28:24.363 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.364 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 nvme0n1 00:28:24.623 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.623 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.623 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.623 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.623 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.882 19:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.882 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.142 nvme0n1 00:28:25.142 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.142 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.142 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.142 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.142 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.142 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.142 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.401 nvme0n1 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.401 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.660 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.660 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.660 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.660 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.660 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.661 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.661 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.661 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.661 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.661 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.919 nvme0n1 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.919 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.920 19:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.920 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.179 nvme0n1 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.179 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 nvme0n1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.746 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.314 nvme0n1 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.314 19:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.314 19:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.314 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.909 nvme0n1 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.909 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.909 
19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.476 nvme0n1 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.476 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.477 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.735 nvme0n1 00:28:28.735 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.735 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.735 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.735 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.735 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.735 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:28.994 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.994 19:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.995 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.563 nvme0n1 00:28:29.563 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.563 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.563 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.563 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.563 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.822 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.759 nvme0n1 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.759 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.760 
19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.760 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.328 nvme0n1 00:28:31.328 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.328 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.328 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.328 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.328 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.328 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.587 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.588 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.526 nvme0n1 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.526 19:05:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.526 19:05:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.526 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.527 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.527 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.094 nvme0n1 00:28:33.094 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.094 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.094 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.094 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.094 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.094 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.354 nvme0n1 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.354 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.614 nvme0n1 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.614 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:33.615 
19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.615 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.874 nvme0n1 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.874 
19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.874 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.875 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.133 nvme0n1 00:28:34.133 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.133 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.133 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.134 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.393 nvme0n1 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.393 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.394 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.653 nvme0n1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.653 
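
The trace above is one pass of the test's connect/verify/detach cycle: host/auth.sh@60-61 push the digest/dhgroup policy and attach with the DH-HMAC-CHAP keys, @64 confirms the controller actually came up, and @65 tears it down before the next keyid. A minimal bash reconstruction of that cycle, assembled only from the @55-65 lines visible in this log, is sketched below; rpc_cmd (the SPDK rpc.py wrapper), the ckeys array, the 10.0.0.1:4420 target and the nqn.2024-02.io.spdk NQNs all appear verbatim in the trace, but the function body itself is an assumption, not the shipped host/auth.sh.

    connect_authenticate() {   # sketch reconstructed from the xtrace, not the real script
        local digest=$1 dhgroup=$2 keyid=$3
        # host/auth.sh@58: expands to zero arguments when ckeys[keyid] is empty/unset
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # host/auth.sh@60: restrict the initiator to the combination under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # host/auth.sh@61: attach; this only succeeds if DH-HMAC-CHAP completes
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # host/auth.sh@64-65: verify the controller exists, then detach for the next round
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The surrounding @101-103 lines show the driver: an outer loop over dhgroups (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and an inner loop over keyids 0-4, each iteration re-keying the target via nvmet_auth_set_key before calling the function above.
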
19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.653 19:05:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.653 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.912 nvme0n1 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:34.912 19:05:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.912 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.172 nvme0n1 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.172 19:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.172 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.432 nvme0n1 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.432 
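
Note the ckey handling at this point in the trace: keyid 4 has no companion controller key (host/auth.sh@46 sets ckey= and @51 evaluates [[ -z '' ]]), so the attach at @61 carries --dhchap-key key4 with no --dhchap-ctrlr-key at all. That is the ${var:+word} expansion from @58 doing the work. The stand-alone snippet below demonstrates the idiom; the array contents are hypothetical placeholders, not the keys from this run.

    #!/usr/bin/env bash
    # ${ckeys[keyid]:+word}: "word" only expands when ckeys[keyid] is set and
    # non-empty, so the ckey array contributes either two arguments or none.
    ckeys=([0]="DHHC-1:03:placeholder=" [4]="")   # hypothetical values
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args:" "${ckey[@]}"
    done
    # prints:
    # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 -> 0 extra args:
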
19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.432 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:35.691 nvme0n1 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:35.691 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.692 19:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.692 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.260 nvme0n1 00:28:36.260 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.260 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.260 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.260 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.260 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.260 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.260 19:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.260 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.261 19:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.261 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.520 nvme0n1 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.520 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.521 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.521 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.521 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.521 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.521 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.521 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 nvme0n1 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.780 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.781 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.781 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.781 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.781 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.040 nvme0n1 00:28:37.040 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.040 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.040 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.040 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.040 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.298 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.299 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.558 nvme0n1 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.558 19:05:22 
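# The DHHC-1 strings echoed into the nvmet configfs above are DH-HMAC-CHAP
# secrets in the standard nvme-cli representation: "DHHC-1:<hh>:<base64(secret
# plus crc32)>:", where <hh> is 00 for an untransformed secret and 01/02/03
# when the secret was transformed with SHA-256/384/512. A quick sanity check
# on one key from this run (assumes GNU coreutils base64; 36 decoded bytes =
# 32-byte secret plus 4-byte CRC-32):
echo 'Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu' | base64 -d | wc -c   # -> 36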
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.558 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.127 nvme0n1 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.127 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.127 19:05:23 
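# Every block in this trace is one pass of the same loop: program the key into
# the kernel nvmet target, constrain the initiator to a single digest/dhgroup
# pair, attach, verify the controller came up, detach. A condensed sketch of
# one pass, reconstructed from the trace rather than copied from host/auth.sh
# (the keys[]/ckeys[] arrays and the rpc_cmd helper come from the surrounding
# test scripts):
for keyid in "${!keys[@]}"; do
	nvmet_auth_set_key sha512 ffdhe6144 "$keyid"
	rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
done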
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.127 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.694 nvme0n1 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.694 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.695 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.262 nvme0n1 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.262 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.864 nvme0n1 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.864 19:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.864 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.865 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.865 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.865 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.432 nvme0n1 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk2MGM5MzZhNTI4NjIzZWI3YjFkNDQ0ZWUzMmRiMWStlSHu: 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: ]] 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzU5MDVkMTI5YmYzMjNiMjIyZmEyZmQzN2UxM2IzOWUwMjZlYTNmNWM3YjYyOWRiZjY1NGQ5YjRmN2MyYTZmYU//TW0=: 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.432 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.433 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.369 nvme0n1 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.369 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.370 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.937 nvme0n1 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.938 19:05:26 
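# The ip_candidates dance repeated before every attach is get_main_ns_ip
# selecting the address for the active transport: rdma resolves through
# NVMF_FIRST_TARGET_IP, tcp through NVMF_INITIATOR_IP (10.0.0.1 in this run).
# A sketch reconstructed from the unrolled trace; the variable names match
# the trace, the indirect-expansion step between @748 and @750 is inferred:
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	[[ -z $TEST_TRANSPORT ]] && return 1
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"   # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
}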
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.938 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjFlN2Q3YzE5N2QzYTRlZjA3MGFmYTkwM2U3YTNkYjiOmOlP: 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: ]] 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0OTg4YWI3YzZhNjk2YzQ2ODA0ZWVjMzljZTc3NmJInyMt: 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.197 19:05:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.197 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.765 nvme0n1 00:28:42.765 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFjNmM4MGIzZTJhNGFlNzZhY2JkYzdhNGRiYjk2MjY4YzM5MTI5NjAzYzg4NTBlX6b26w==: 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzRkYmQ4OTJmZjUwNzkzMTRmMmZhMWE4ZjNiMmZmYTm1ndoK: 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.024 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.024 
19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.959 nvme0n1 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGFjM2NiNjAyOGJkOGQyNjc5YjhhNWM0Y2E1NzMwOTAwOTJiYjQwNDIyYzMwZDA0MjlhMWRiODM1MzY5NjZlNzA9lF0=: 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.959 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.960 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.527 nvme0n1 00:28:44.527 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.527 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.527 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.527 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.527 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.527 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTlhNGEwZWZiZjM2ZGU1Mzc5NDIwYWZjYmQ4MThmNWEzNmVhNzBiNzM1N2M1MGNjpUkvhw==: 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdiZDU0ZjI3NmIzNmNkMmM0NmFkOTI4ODU2MzkwZTA0MDk5ZDhlZmRhOGJmODc1etxxAg==: 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.786 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.786 request: 00:28:44.786 { 00:28:44.787 "name": "nvme0", 00:28:44.787 "trtype": "tcp", 00:28:44.787 "traddr": "10.0.0.1", 00:28:44.787 "adrfam": "ipv4", 00:28:44.787 "trsvcid": "4420", 00:28:44.787 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:44.787 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:44.787 "prchk_reftag": false, 00:28:44.787 "prchk_guard": false, 00:28:44.787 "hdgst": false, 00:28:44.787 "ddgst": false, 00:28:44.787 "method": "bdev_nvme_attach_controller", 00:28:44.787 "req_id": 1 00:28:44.787 } 00:28:44.787 Got JSON-RPC error response 00:28:44.787 response: 00:28:44.787 { 00:28:44.787 "code": -5, 00:28:44.787 "message": "Input/output error" 00:28:44.787 } 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.787 19:05:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.787 request: 00:28:44.787 { 00:28:44.787 "name": "nvme0", 00:28:44.787 "trtype": "tcp", 00:28:44.787 "traddr": "10.0.0.1", 00:28:44.787 "adrfam": "ipv4", 00:28:44.787 "trsvcid": "4420", 00:28:44.787 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:44.787 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:44.787 "prchk_reftag": false, 00:28:44.787 "prchk_guard": false, 00:28:44.787 "hdgst": false, 00:28:44.787 "ddgst": false, 00:28:44.787 "dhchap_key": "key2", 00:28:44.787 "method": "bdev_nvme_attach_controller", 00:28:44.787 "req_id": 1 00:28:44.787 } 00:28:44.787 Got JSON-RPC error response 00:28:44.787 response: 00:28:44.787 { 00:28:44.787 "code": -5, 00:28:44.787 "message": "Input/output error" 00:28:44.787 } 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:44.787 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:45.046 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.047 request: 00:28:45.047 { 00:28:45.047 "name": "nvme0", 00:28:45.047 "trtype": "tcp", 00:28:45.047 "traddr": "10.0.0.1", 00:28:45.047 "adrfam": "ipv4", 00:28:45.047 "trsvcid": "4420", 00:28:45.047 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.047 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.047 "prchk_reftag": false, 00:28:45.047 "prchk_guard": false, 00:28:45.047 "hdgst": false, 00:28:45.047 "ddgst": false, 00:28:45.047 "dhchap_key": "key1", 00:28:45.047 "dhchap_ctrlr_key": "ckey2", 00:28:45.047 "method": "bdev_nvme_attach_controller", 00:28:45.047 "req_id": 1 00:28:45.047 } 00:28:45.047 Got JSON-RPC error response 00:28:45.047 response: 00:28:45.047 { 00:28:45.047 "code": -5, 00:28:45.047 "message": "Input/output error" 00:28:45.047 } 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.047 rmmod nvme_tcp 00:28:45.047 rmmod nvme_fabrics 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2658620 ']' 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2658620 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2658620 ']' 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2658620 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:45.047 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2658620 00:28:45.047 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:45.047 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:45.047 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2658620' 00:28:45.047 killing process with pid 2658620 00:28:45.047 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2658620 00:28:45.047 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2658620 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.306 19:05:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.306 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:47.842 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:47.843 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:47.843 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:47.843 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:47.843 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:47.843 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:47.843 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:50.377 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:50.377 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:51.335 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:28:51.335 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QR3 /tmp/spdk.key-null.Gid /tmp/spdk.key-sha256.CXh /tmp/spdk.key-sha384.9XC /tmp/spdk.key-sha512.joQ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:51.335 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:54.623 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:54.623 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:54.623 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:54.623 00:28:54.623 real 0m56.124s 00:28:54.623 user 0m50.624s 00:28:54.623 sys 0m12.798s 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.623 ************************************ 00:28:54.623 END TEST nvmf_auth_host 00:28:54.623 ************************************ 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.623 ************************************ 00:28:54.623 START TEST nvmf_digest 00:28:54.623 ************************************ 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:54.623 * Looking for test storage... 
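The nvmf_auth_host section that ends above is a negative-path check of NVMe/TCP DH-HMAC-CHAP: with only sha256 digests and the ffdhe2048 DH group allowed via bdev_nvme_set_options, each bdev_nvme_attach_controller attempt (no host key, key2 alone, key1 paired with ckey2) must fail, and the suite's NOT wrapper asserts the JSON-RPC error -5 (Input/output error). A minimal sketch of that pattern, assuming scripts/rpc.py stands in for the suite's rpc_cmd helper:

  # Allow only sha256 digests and the ffdhe2048 DH group on the host side.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Attaching without a matching DH-HMAC-CHAP key is expected to fail (code -5).
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi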
00:28:54.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.623 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:54.624 
19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:54.624 19:05:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:01.196 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:01.197 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:01.197 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.197 
19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:01.197 Found net devices under 0000:af:00.0: cvl_0_0 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:01.197 Found net devices under 0000:af:00.1: cvl_0_1 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.197 19:05:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.197 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:01.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:29:01.197 00:29:01.197 --- 10.0.0.2 ping statistics --- 00:29:01.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.198 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:01.198 00:29:01.198 --- 10.0.0.1 ping statistics --- 00:29:01.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.198 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:01.198 ************************************ 00:29:01.198 START TEST nvmf_digest_clean 00:29:01.198 ************************************ 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2673565 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2673565 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2673565 ']' 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.198 19:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.198 [2024-07-24 19:05:45.456831] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:29:01.198 [2024-07-24 19:05:45.456902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.198 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.198 [2024-07-24 19:05:45.552796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.198 [2024-07-24 19:05:45.638197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.198 [2024-07-24 19:05:45.638242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.198 [2024-07-24 19:05:45.638251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.198 [2024-07-24 19:05:45.638260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.198 [2024-07-24 19:05:45.638267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
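The digest target above is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the suite can push configuration RPCs before I/O processing starts, and waitforlisten blocks until /var/tmp/spdk.sock answers. A rough sketch of that bring-up, with waitforlisten approximated by polling rpc_get_methods (the real helper lives in the suite's autotest_common.sh):

  # Start the target in the test namespace; -i 0 pins the SHM id, -e 0xFFFF sets the tracepoint group mask.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the RPC socket until the app is up (stand-in for waitforlisten).
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done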
00:29:01.198 [2024-07-24 19:05:45.638290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.457 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.715 null0 00:29:01.715 [2024-07-24 19:05:46.532578] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.715 [2024-07-24 19:05:46.556793] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2673841 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2673841 /var/tmp/bperf.sock 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:01.715 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2673841 ']' 00:29:01.716 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.716 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.716 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.716 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.716 19:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.716 [2024-07-24 19:05:46.641946] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:29:01.716 [2024-07-24 19:05:46.642055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673841 ] 00:29:01.716 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.975 [2024-07-24 19:05:46.758966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.975 [2024-07-24 19:05:46.860132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.911 19:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.911 19:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:02.911 19:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:02.911 19:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:02.911 19:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:03.479 19:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.479 19:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.047 nvme0n1 00:29:04.048 19:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:04.048 19:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:04.307 Running I/O for 2 seconds... 
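The first clean-digest pass attaches nvme0 with --ddgst over the bperf control socket and then drives the 2-second randread workload through bdevperf's RPC helper; its results follow. A sketch of the same sequence, sockets and arguments as logged:

  # Finish bdevperf init (it was started with --wait-for-rpc), then attach with data digest on.
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Kick off the configured workload (randread, 4096 B, QD 128) and wait for completion.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests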
00:29:06.841 00:29:06.841 Latency(us) 00:29:06.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.841 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:06.841 nvme0n1 : 2.00 13960.95 54.53 0.00 0.00 9157.23 5153.51 20375.74 00:29:06.841 =================================================================================================================== 00:29:06.841 Total : 13960.95 54.53 0.00 0.00 9157.23 5153.51 20375.74 00:29:06.841 0 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:06.841 | select(.opcode=="crc32c") 00:29:06.841 | "\(.module_name) \(.executed)"' 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2673841 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2673841 ']' 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2673841 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673841 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673841' 00:29:06.841 killing process with pid 2673841 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2673841 00:29:06.841 Received shutdown signal, test time was about 2.000000 seconds 00:29:06.841 00:29:06.841 Latency(us) 00:29:06.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.841 =================================================================================================================== 00:29:06.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2673841 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2674649 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2674649 /var/tmp/bperf.sock 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2674649 ']' 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:06.841 19:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.099 [2024-07-24 19:05:51.855134] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:29:07.099 [2024-07-24 19:05:51.855200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674649 ] 00:29:07.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.099 Zero copy mechanism will not be used. 
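The second pass repeats the check with 128 KiB reads at queue depth 16; since 131072 bytes exceeds bdevperf's 65536-byte zero-copy threshold, the tool reports above that zero copy is disabled for this run. The invocation, as logged (workspace prefix trimmed):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc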
00:29:07.099 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.099 [2024-07-24 19:05:51.937874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.099 [2024-07-24 19:05:52.044144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.356 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.357 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:07.357 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:07.357 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:07.357 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.925 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.925 19:05:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.183 nvme0n1 00:29:08.183 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:08.183 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.442 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.442 Zero copy mechanism will not be used. 00:29:08.442 Running I/O for 2 seconds... 
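
Each run then drives the same RPC sequence against that socket, as traced above: finish the deferred framework init, attach an NVMe-oF TCP controller with data digest enabled (--ddgst), and kick off the 2-second workload through bdevperf.py. Condensed, with SPDK assumed to point at the same build tree as before:

    # Finish init, attach the digest-enabled controller, run the I/O job.
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
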
00:29:10.342
00:29:10.343 Latency(us)
00:29:10.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.343 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:10.343 nvme0n1 : 2.00 3566.15 445.77 0.00 0.00 4480.54 1236.25 8281.37
00:29:10.343 ===================================================================================================================
00:29:10.343 Total : 3566.15 445.77 0.00 0.00 4480.54 1236.25 8281.37
00:29:10.343 0
00:29:10.343 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:10.343 | select(.opcode=="crc32c")
00:29:10.343 | "\(.module_name) \(.executed)"'
00:29:10.343 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:10.601 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2674649
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2674649 ']'
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2674649
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674649
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674649'
killing process with pid 2674649
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2674649
Received shutdown signal, test time was about 2.000000 seconds
00:29:10.601
00:29:10.601 Latency(us)
00:29:10.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.601 ===================================================================================================================
00:29:10.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:10.601 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 2674649 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2675434 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2675434 /var/tmp/bperf.sock 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2675434 ']' 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.860 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.861 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.120 [2024-07-24 19:05:55.871206] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
00:29:11.120 [2024-07-24 19:05:55.871272] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675434 ] 00:29:11.120 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.120 [2024-07-24 19:05:55.953035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.120 [2024-07-24 19:05:56.049121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.120 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:11.120 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:11.120 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:11.120 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:11.120 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:11.687 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.687 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.947 nvme0n1 00:29:11.947 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:11.947 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.947 Running I/O for 2 seconds... 
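
The MiB/s column in the result tables is just IOPS times the I/O size: 13960.95 x 4096 B / 2^20 = 54.53 MiB/s for the depth-128 randread run, and 3566.15 x 131072 B / 2^20 = 445.77 MiB/s for the depth-16 run, matching the reported values. A one-line cross-check:

    # MiB/s = IOPS * io_size / 2^20 for the two randread tables above.
    awk 'BEGIN { printf "%.2f %.2f\n", 13960.95*4096/1048576, 3566.15*131072/1048576 }'
    # prints: 54.53 445.77
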
00:29:13.890
00:29:13.890 Latency(us)
00:29:13.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.890 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:13.890 nvme0n1 : 2.01 17942.74 70.09 0.00 0.00 7124.18 3589.59 14954.12
00:29:13.890 ===================================================================================================================
00:29:13.890 Total : 17942.74 70.09 0.00 0.00 7124.18 3589.59 14954.12
00:29:13.890 0
00:29:14.148 19:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
19:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
19:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
19:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:14.148 | select(.opcode=="crc32c")
00:29:14.148 | "\(.module_name) \(.executed)"'
00:29:14.148 19:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:14.407 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2675434
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2675434 ']'
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2675434
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2675434
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2675434'
killing process with pid 2675434
19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2675434
Received shutdown signal, test time was about 2.000000 seconds
00:29:14.407
00:29:14.407 Latency(us)
00:29:14.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.407 ===================================================================================================================
00:29:14.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:14.407 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 2675434 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2675980 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2675980 /var/tmp/bperf.sock 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2675980 ']' 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.666 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.666 [2024-07-24 19:05:59.480035] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:29:14.666 [2024-07-24 19:05:59.480083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675980 ] 00:29:14.666 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.666 Zero copy mechanism will not be used. 
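
Teardown between runs is the killprocess/wait pair visible in the trace. Stripped of the uname and sudo special-casing that autotest_common.sh performs, the sequence is roughly the following sketch:

    # Sketch of killprocess: confirm the pid is alive, log, signal, reap.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1      # process already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
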
00:29:14.666 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.666 [2024-07-24 19:05:59.551469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.666 [2024-07-24 19:05:59.658230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.234 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.234 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:15.234 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:15.234 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:15.234 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:15.491 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.491 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.750 nvme0n1 00:29:15.750 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:15.750 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.750 Zero copy mechanism will not be used. 00:29:15.750 Running I/O for 2 seconds... 
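
Every results table above is followed by the same verification: read the accel framework statistics back from bdevperf and confirm that the crc32c digests actually ran, and in the expected module (software here, since scan_dsa=false). The traced get_accel_stats check reduces to:

    # Keep the crc32c row of accel_get_stats; the test asserts that at least
    # one operation executed and that it ran in the software module.
    stats=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    read -r acc_module acc_executed <<< "$stats"
    (( acc_executed > 0 )) && [[ $acc_module == software ]]
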
00:29:18.284
00:29:18.284 Latency(us)
00:29:18.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.284 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:18.284 nvme0n1 : 2.00 4627.88 578.48 0.00 0.00 3448.90 2204.39 7477.06
00:29:18.284 ===================================================================================================================
00:29:18.284 Total : 4627.88 578.48 0.00 0.00 3448.90 2204.39 7477.06
00:29:18.284 0
00:29:18.284 19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:18.284 | select(.opcode=="crc32c")
00:29:18.284 | "\(.module_name) \(.executed)"'
00:29:18.284 19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:18.284 19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2675980
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2675980 ']'
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2675980
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
19:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2675980
00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2675980'
killing process with pid 2675980
19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2675980
Received shutdown signal, test time was about 2.000000 seconds
00:29:18.285
00:29:18.285 Latency(us)
00:29:18.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.285 ===================================================================================================================
00:29:18.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 2675980 00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2673565 00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2673565 ']' 00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2673565 00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.285 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673565 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673565' 00:29:18.544 killing process with pid 2673565 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2673565 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2673565 00:29:18.544 00:29:18.544 real 0m18.112s 00:29:18.544 user 0m37.238s 00:29:18.544 sys 0m4.362s 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:18.544 ************************************ 00:29:18.544 END TEST nvmf_digest_clean 00:29:18.544 ************************************ 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.544 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:18.803 ************************************ 00:29:18.803 START TEST nvmf_digest_error 00:29:18.803 ************************************ 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2676924 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2676924 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2676924 ']' 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:18.803 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.803 [2024-07-24 19:06:03.637324] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:29:18.803 [2024-07-24 19:06:03.637377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.803 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.803 [2024-07-24 19:06:03.723464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.062 [2024-07-24 19:06:03.812156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.062 [2024-07-24 19:06:03.812197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.062 [2024-07-24 19:06:03.812208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.062 [2024-07-24 19:06:03.812216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.062 [2024-07-24 19:06:03.812223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
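
Since the target was started with -e 0xFFFF (all tracepoint groups) and shm id 0, the two notices above describe how to get at the trace data; the copy destination in the second line is arbitrary:

    # Live snapshot of nvmf tracepoints from the running target (shm id 0):
    spdk_trace -s nvmf -i 0
    # Or preserve the shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
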
00:29:19.062 [2024-07-24 19:06:03.812246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.998 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.999 [2024-07-24 19:06:04.871306] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.999 null0 00:29:19.999 [2024-07-24 19:06:04.971556] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.999 [2024-07-24 19:06:04.995759] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.999 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2677200 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2677200 /var/tmp/bperf.sock 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2677200 ']' 
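
The digest_error setup differs from the clean pass in one target-side step: accel_assign_opc -o crc32c -m error (noticed above) routes every crc32c operation through the error-injection accel module before null0 is exposed on 10.0.0.2:4420. The actual rpc_cmd calls are folded into common_target_config; a hand-written equivalent might look like the sketch below, where the null bdev sizing and the subsystem plumbing are assumptions rather than values from this log:

    # Assumed stand-alone equivalent of the traced target configuration.
    rpc.py accel_assign_opc -o crc32c -m error     # crc32c via 'error' module
    rpc.py framework_start_init
    rpc.py bdev_null_create null0 100 4096         # illustrative size/block
    rpc.py nvmf_create_transport -t tcp
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
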
00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.999 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.258 [2024-07-24 19:06:05.050945] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:29:20.258 [2024-07-24 19:06:05.051003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677200 ] 00:29:20.258 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.258 [2024-07-24 19:06:05.133698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.258 [2024-07-24 19:06:05.241075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.517 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.517 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:20.517 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.517 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.775 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:20.775 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.775 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.775 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.775 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.775 19:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.343 nvme0n1 00:29:21.343 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:21.343 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.343 19:06:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:21.343 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.343 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:21.343 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.343 Running I/O for 2 seconds... 00:29:21.343 [2024-07-24 19:06:06.273594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.343 [2024-07-24 19:06:06.273652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.343 [2024-07-24 19:06:06.273671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.343 [2024-07-24 19:06:06.289611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.343 [2024-07-24 19:06:06.289650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.343 [2024-07-24 19:06:06.289667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.343 [2024-07-24 19:06:06.311533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.343 [2024-07-24 19:06:06.311570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.343 [2024-07-24 19:06:06.311586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.343 [2024-07-24 19:06:06.331820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.343 [2024-07-24 19:06:06.331855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.343 [2024-07-24 19:06:06.331871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.343 [2024-07-24 19:06:06.348360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.343 [2024-07-24 19:06:06.348395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.343 [2024-07-24 19:06:06.348410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.367705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.367740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.367756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.384373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.384407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.384422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.403664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.403699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.403714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.421814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.421848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.421864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.437575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.437613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.437629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.459610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.459644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.459660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.481781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.481814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.481835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.502699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.502733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.602 [2024-07-24 19:06:06.502749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.519503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.519536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.602 [2024-07-24 19:06:06.519551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.602 [2024-07-24 19:06:06.539861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.602 [2024-07-24 19:06:06.539895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.603 [2024-07-24 19:06:06.539910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.603 [2024-07-24 19:06:06.554719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.603 [2024-07-24 19:06:06.554752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.603 [2024-07-24 19:06:06.554767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.603 [2024-07-24 19:06:06.575988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.603 [2024-07-24 19:06:06.576021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.603 [2024-07-24 19:06:06.576036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.603 [2024-07-24 19:06:06.594977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.603 [2024-07-24 19:06:06.595012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.603 [2024-07-24 19:06:06.595027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.610890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.610924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.610938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.631870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.631904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8350 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.631919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.653211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.653250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.653266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.669578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.669621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.669637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.687621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.687654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.687670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.702639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.702671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.702685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.723972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.724005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.724020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.738663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.738696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.738710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.759457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.759490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.759504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.781917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.862 [2024-07-24 19:06:06.781951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.862 [2024-07-24 19:06:06.781966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.862 [2024-07-24 19:06:06.797510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.863 [2024-07-24 19:06:06.797543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.863 [2024-07-24 19:06:06.797558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.863 [2024-07-24 19:06:06.818023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.863 [2024-07-24 19:06:06.818058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.863 [2024-07-24 19:06:06.818073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.863 [2024-07-24 19:06:06.838887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.863 [2024-07-24 19:06:06.838920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.863 [2024-07-24 19:06:06.838936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.863 [2024-07-24 19:06:06.855629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:21.863 [2024-07-24 19:06:06.855662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.863 [2024-07-24 19:06:06.855676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.874015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.874050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.874065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.887107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.887140] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.887155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.904101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.904134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.904150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.918278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.918312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.918327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.938007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.938040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.938055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.960309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.960344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.960365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:06.980935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:06.980969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:06.980984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.003832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.003866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.003881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.026475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.026510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.026525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.040574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.040615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.040631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.060932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.060967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.060983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.079283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.079320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.079336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.095170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.095203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.095219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.112447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.112480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.112495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.121 [2024-07-24 19:06:07.127952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.121 [2024-07-24 19:06:07.127991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.121 [2024-07-24 19:06:07.128006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.148492] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.148526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.148541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.163658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.163692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.163707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.182149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.182183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.182198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.204741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.204775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.204791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.223790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.223823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.223838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.241127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.241160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.241174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.261726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.261760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.261775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
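Each failed I/O in the burst above is reported as a triplet: nvme_tcp.c flags a CRC-32C data-digest mismatch on the receive path, nvme_qpair.c prints the READ that was in flight, and the completion is printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic command status) with status code 0x22 (Transient Transport Error). As the set-up trace for the next pass shows further below, the bperf controller is attached with --bdev-retry-count -1, so bdev_nvme keeps retrying these completions instead of failing the reads. When reading a capture like this one offline, the two counters can be cross-checked with grep (the file name build.log is only a placeholder):

    grep -c 'data digest error on tqpair' build.log          # digest mismatches seen by the host
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log    # completions mapped to SCT 0x0 / SC 0x22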
00:29:22.380 [2024-07-24 19:06:07.281238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.281273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.281294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.296818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.296852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.296868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.315797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.315831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.315846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.380 [2024-07-24 19:06:07.335197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.380 [2024-07-24 19:06:07.335229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.380 [2024-07-24 19:06:07.335244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.381 [2024-07-24 19:06:07.348867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.381 [2024-07-24 19:06:07.348899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.381 [2024-07-24 19:06:07.348914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.381 [2024-07-24 19:06:07.369113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.381 [2024-07-24 19:06:07.369147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.381 [2024-07-24 19:06:07.369162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.639 [2024-07-24 19:06:07.390126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.639 [2024-07-24 19:06:07.390159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.639 [2024-07-24 19:06:07.390175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.639 [2024-07-24 19:06:07.406663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.639 [2024-07-24 19:06:07.406696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.639 [2024-07-24 19:06:07.406712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.639 [2024-07-24 19:06:07.426244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.639 [2024-07-24 19:06:07.426278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.639 [2024-07-24 19:06:07.426293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.639 [2024-07-24 19:06:07.441561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.441600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.441622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.464809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.464843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.464858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.479547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.479580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.479595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.498893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.498927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.498943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.513820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.513853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.513868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.531150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.531184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.531199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.546663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.546696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.546712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.565849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.565882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.565897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.588146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.588179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.588195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.610074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.610108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.610123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.625907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.625939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.625956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.640 [2024-07-24 19:06:07.646576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.640 [2024-07-24 19:06:07.646617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.640 [2024-07-24 19:06:07.646633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.667387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.667420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.667434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.685079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.685112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.685128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.705031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.705064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.705079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.725337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.725371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.725386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.741350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.741384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.741399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.763900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.763933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.763954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.785853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.785887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:22.899 [2024-07-24 19:06:07.785902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.806424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.806458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.806474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.827807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.827840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.827855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.843070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.843104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.843118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.859429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.859464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.859479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.875282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.875316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.875330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.899 [2024-07-24 19:06:07.896315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:22.899 [2024-07-24 19:06:07.896349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.899 [2024-07-24 19:06:07.896364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:07.911787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:07.911820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:7469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:07.911836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:07.927127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:07.927166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:07.927181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:07.944871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:07.944905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:07.944920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:07.963005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:07.963042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:07.963058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:07.978830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:07.978863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:07.978879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:07.994094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:07.994127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:07.994142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:08.009448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:08.009481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:08.009497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:08.024632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:08.024665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.158 [2024-07-24 19:06:08.024679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.158 [2024-07-24 19:06:08.040115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.158 [2024-07-24 19:06:08.040148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.159 [2024-07-24 19:06:08.040163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.159 [2024-07-24 19:06:08.056141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.159 [2024-07-24 19:06:08.056174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.159 [2024-07-24 19:06:08.056189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.159 [2024-07-24 19:06:08.076600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.159 [2024-07-24 19:06:08.076640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.159 [2024-07-24 19:06:08.076655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.159 [2024-07-24 19:06:08.096840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.159 [2024-07-24 19:06:08.096874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.159 [2024-07-24 19:06:08.096890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.159 [2024-07-24 19:06:08.119773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.159 [2024-07-24 19:06:08.119807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.159 [2024-07-24 19:06:08.119824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.159 [2024-07-24 19:06:08.139270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.159 [2024-07-24 19:06:08.139303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.159 [2024-07-24 19:06:08.139319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.159 [2024-07-24 19:06:08.155031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0) 00:29:23.159 
[2024-07-24 19:06:08.155064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.159 [2024-07-24 19:06:08.155079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.418 [2024-07-24 19:06:08.176389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0)
00:29:23.418 [2024-07-24 19:06:08.176423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.418 [2024-07-24 19:06:08.176437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.418 [2024-07-24 19:06:08.195407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0)
00:29:23.418 [2024-07-24 19:06:08.195440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.418 [2024-07-24 19:06:08.195455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.418 [2024-07-24 19:06:08.211984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0)
00:29:23.418 [2024-07-24 19:06:08.212016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.418 [2024-07-24 19:06:08.212032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.418 [2024-07-24 19:06:08.227673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0)
00:29:23.418 [2024-07-24 19:06:08.227709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.418 [2024-07-24 19:06:08.227730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.418 [2024-07-24 19:06:08.248911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd019f0)
00:29:23.418 [2024-07-24 19:06:08.248945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:23.419 [2024-07-24 19:06:08.248962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.419
00:29:23.419 Latency(us)
00:29:23.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.419 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:23.419 nvme0n1 : 2.01 13789.99 53.87 0.00 0.00 9264.00 5362.04 34078.72
00:29:23.419 ===================================================================================================================
00:29:23.419 Total : 13789.99 53.87 0.00 0.00 9264.00 5362.04 34078.72
00:29:23.419 0
00:29:23.419 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
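The burst ends and digest.sh reads the error counter back over the bperf RPC socket, as traced below: bdev_get_iostat reports per-bdev NVMe error statistics (collection was switched on with --nvme-error-stat), jq extracts the transient-transport-error count, and the test asserts it is non-zero (108 in this run). A minimal stand-alone sketch of the same check, assuming only the socket path and bdev name used in this run:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'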
00:29:23.419 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:23.419 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:23.419 | .driver_specific
00:29:23.419 | .nvme_error
00:29:23.419 | .status_code
00:29:23.419 | .command_transient_transport_error'
00:29:23.419 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:23.677 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 108 > 0 ))
00:29:23.677 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2677200
00:29:23.677 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2677200 ']'
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2677200
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2677200
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2677200'
killing process with pid 2677200
19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2677200
Received shutdown signal, test time was about 2.000000 seconds
00:29:23.678
00:29:23.678 Latency(us)
00:29:23.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.678 ===================================================================================================================
00:29:23.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2677200
00:29:23.678 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2678044
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2678044 /var/tmp/bperf.sock
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
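With the 4096-byte pass finished and its bdevperf process killed, run_bperf_err starts the 131072-byte, queue-depth-16 pass. The @57 trace above amounts to the following launch; -m 2 pins the app to core 1 (cpumask 0x2), -r points it at the bperf RPC socket, -o/-q/-t set I/O size, queue depth and runtime, and -z starts the app idle so it can be configured over RPC before perform_tests is issued:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z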
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2678044 ']'
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:23.936 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:23.936 [2024-07-24 19:06:08.855685] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:29:23.936 [2024-07-24 19:06:08.855746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678044 ]
00:29:23.936 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:23.936 Zero copy mechanism will not be used.
00:29:23.936 EAL: No free 2048 kB hugepages reported on node 1
00:29:23.936 [2024-07-24 19:06:08.937162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.195 [2024-07-24 19:06:09.042118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:25.128 19:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:25.128 19:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:25.128 19:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.128 19:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.128 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:25.128 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:25.128 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:25.128 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:25.128 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:25.128 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:25.385 nvme0n1
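Before any I/O runs, the new bperf instance is configured over its RPC socket: NVMe error statistics and unbounded retries are enabled, crc32c error injection is cleared (rpc_cmd carries no -s flag, so it goes to the default SPDK application socket, presumably the NVMe-oF target in this job), and the controller is attached with TCP data digest (--ddgst) enabled. The @67 trace just below then re-arms the injection so the next 32 crc32c operations produce corrupted digests, which is what triggers the second burst of digest errors. Condensed into the bare RPC sequence, a sketch using only commands visible in this trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable        # start with injection off (default socket)
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt the next 32 crc32c ops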
00:29:25.385 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:25.385 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:25.385 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:25.385 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:25.385 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:25.385 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:25.643 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:25.643 Zero copy mechanism will not be used.
00:29:25.643 Running I/O for 2 seconds...
00:29:25.643 [2024-07-24 19:06:10.512974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810)
00:29:25.643 [2024-07-24 19:06:10.513023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.643 [2024-07-24 19:06:10.513043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:25.643 [2024-07-24 19:06:10.523387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810)
00:29:25.643 [2024-07-24 19:06:10.523425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.643 [2024-07-24 19:06:10.523441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:25.643 [2024-07-24 19:06:10.533228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810)
00:29:25.643 [2024-07-24 19:06:10.533263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.643 [2024-07-24 19:06:10.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:25.643 [2024-07-24 19:06:10.542567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810)
00:29:25.643 [2024-07-24 19:06:10.542600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.643 [2024-07-24 19:06:10.542625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.643 [2024-07-24 19:06:10.551500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810)
00:29:25.643 [2024-07-24 19:06:10.551533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.643 [2024-07-24 19:06:10.551548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.561766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.561798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.561813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.572194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.572226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.572241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.582416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.582449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.582463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.592949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.592981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.592996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.602808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.602842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.602857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.612681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.612714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.612730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.623023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.623055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.623070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.633145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.633179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.633193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.643 [2024-07-24 19:06:10.643418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.643 [2024-07-24 19:06:10.643451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.643 [2024-07-24 19:06:10.643465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.654027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.654063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.654079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.663758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.663791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.663811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.673706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.673741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.673758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.683621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.683654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.683669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.693711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.693748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.693764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.704126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.704160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.704176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.714844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.714878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.714894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.725085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.725119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.725133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.735816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.735850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.735865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.746358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.746392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.746407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.755996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.756035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.756050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.765256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.765289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.765304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.774256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.774289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.774304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.783994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.784029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.784044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.793616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.793649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.793667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.803007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.803040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.803055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.812199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.812232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.902 [2024-07-24 19:06:10.812247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.902 [2024-07-24 19:06:10.820576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.902 [2024-07-24 19:06:10.820759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.820777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.830260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.830295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.830312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.839865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.839901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.839916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.849205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.849241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.849257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.858662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.858697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.858713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.867810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.867844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.867860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.877493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.877528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.877543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.886940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.886974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.886991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.896249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.896281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.896296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.903 [2024-07-24 19:06:10.905331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:25.903 [2024-07-24 19:06:10.905365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.903 [2024-07-24 19:06:10.905380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.163 [2024-07-24 19:06:10.914103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.163 [2024-07-24 19:06:10.914142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.163 [2024-07-24 19:06:10.914157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.163 [2024-07-24 19:06:10.922671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.163 [2024-07-24 19:06:10.922706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.163 [2024-07-24 19:06:10.922722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.163 [2024-07-24 19:06:10.931126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.163 [2024-07-24 19:06:10.931159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.163 [2024-07-24 19:06:10.931175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.163 [2024-07-24 19:06:10.939512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.163 [2024-07-24 19:06:10.939545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.163 [2024-07-24 19:06:10.939560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.163 [2024-07-24 19:06:10.947809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:10.947840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.947855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:10.955940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 
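Every triplet above has the same shape: nvme_tcp.c:1459 recomputes the CRC-32C data digest over a received data PDU, finds it does not match the PDU's DDGST field, and the READ is completed back with a transport error. For reference, below is a minimal, self-contained sketch of the CRC-32C (Castagnoli) checksum that NVMe/TCP specifies for DDGST; SPDK's actual path goes through its crc32c and accel helpers, so this bitwise loop is purely illustrative.

/* Minimal sketch of the CRC-32C digest behind the "data digest error"
 * records above. NVMe/TCP defines DDGST as CRC-32C over the PDU data;
 * a mismatch between this value and the received DDGST is what
 * nvme_tcp_accel_seq_recv_compute_crc32_done reports. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;               /* standard CRC-32C init */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++) {
            /* 0x82F63B78 is the reflected Castagnoli polynomial */
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;                 /* standard final XOR */
}

int main(void)
{
    /* "123456789" is the canonical CRC check string; CRC-32C yields
     * 0xE3069283, a quick way to validate the routine. */
    printf("0x%08X\n", crc32c("123456789", 9));
    return 0;
}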
00:29:26.164 [2024-07-24 19:06:10.955972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.955986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:10.964358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:10.964390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.964405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:10.972619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:10.972652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.972666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:10.981374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:10.981407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.981422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:10.990045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:10.990077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.990092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:10.998416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:10.998448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:10.998463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.006819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.006852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.006867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.015588] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.015630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.015645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.024028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.024062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.024077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.032474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.032507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.032522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.040897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.040930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.040944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.049197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.049230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.049245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.057666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.057698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.057718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.066436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.066470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.066484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.074987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.075020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.075034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.083216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.083249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.083263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.091654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.091687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.091701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.100033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.100065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.164 [2024-07-24 19:06:11.100079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.164 [2024-07-24 19:06:11.108522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.164 [2024-07-24 19:06:11.108554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.108569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.117243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.117277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.117293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.125711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.125744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.125759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.134005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.134044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.134059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.142677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.142709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.142724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.151271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.151304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.151319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.159811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.159844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.159858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.165 [2024-07-24 19:06:11.168497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.165 [2024-07-24 19:06:11.168529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.165 [2024-07-24 19:06:11.168544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.177037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.177070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.177085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.185293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.185325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.185340] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.194033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.194066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.194081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.202736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.202768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.202788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.211084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.211118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.211133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.219309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.219340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.219355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.227804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.227836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.227850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.235972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.236005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.236020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.244587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.244628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.244644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.253209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.253240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.253255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.261672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.261704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.261719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.270113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.270145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.270160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.278614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.278654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.278669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.286800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.286832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.286846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.295420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.295452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.295466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.303981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.304013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.304027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.429 [2024-07-24 19:06:11.312477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.429 [2024-07-24 19:06:11.312508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.429 [2024-07-24 19:06:11.312524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.320914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.320946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.320961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.329327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.329358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.329374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.337339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.337372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.337387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.345706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.345739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.345754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.354187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.354219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.354234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.362595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.362636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.362651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.371310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.371342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.371357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.379767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.379798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.379813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.388225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.388257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.388272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.396864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.396896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.396911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.405425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.405457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.405471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.414020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.414052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.414067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.422767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 
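The completion print is identical for every failure: status (00/22) is SCT 0x0 (generic command status) with SC 0x22, Transient Transport Error, and dnr:0 marks it retryable, while sqhd steps 0001, 0021, 0041, 0061 as the submission queue head advances. A hedged sketch of how those fields unpack from a completion queue entry follows; the bit layout is the NVMe base specification's, but the function is an illustrative stand-in, not SPDK's spdk_nvme_print_completion.

/* Decode the completion fields that the log renders as
 * "(SCT/SC) ... sqhd:.. p:.. m:.. dnr:..".  Per the NVMe spec,
 * CQE DW2 carries SQHD/SQID and DW3 carries CID, the phase bit,
 * and the status field. */
#include <stdint.h>
#include <stdio.h>

static void print_cpl(uint32_t dw2, uint32_t dw3)
{
    uint16_t sqhd = dw2 & 0xFFFF;          /* submission queue head */
    uint16_t cid  = dw3 & 0xFFFF;          /* command identifier    */
    unsigned p    = (dw3 >> 16) & 0x1;     /* phase tag             */
    unsigned sc   = (dw3 >> 17) & 0xFF;    /* status code           */
    unsigned sct  = (dw3 >> 25) & 0x7;     /* status code type      */
    unsigned m    = (dw3 >> 30) & 0x1;     /* more                  */
    unsigned dnr  = (dw3 >> 31) & 0x1;     /* do not retry          */

    printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, cid, sqhd, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 is TRANSIENT TRANSPORT ERROR, the status every
     * digest failure above completes with; dnr:0 leaves it retryable. */
    uint32_t dw3 = (0x22u << 17) | 15u;    /* sct=0, sc=0x22, cid=15 */
    print_cpl(0x0021, dw3);                /* prints sqhd:0021       */
    return 0;
}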
00:29:26.430 [2024-07-24 19:06:11.422798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.422818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.430 [2024-07-24 19:06:11.431484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.430 [2024-07-24 19:06:11.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.430 [2024-07-24 19:06:11.431531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.440054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.440086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.440101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.448814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.448846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.448861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.457532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.457564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.457579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.466095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.466127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.466142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.474590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.474630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.474645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.483301] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.483333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.483347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.491753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.491785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.491800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.500184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.500217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.500231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.508662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.508693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.508708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.517003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.517036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.517051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.525473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.525504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.525519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.534176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.534208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.689 [2024-07-24 19:06:11.534223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:26.689 [2024-07-24 19:06:11.542663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.689 [2024-07-24 19:06:11.542695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.542710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.550945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.550978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.550993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.559298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.559331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.559347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.567645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.567678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.567697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.575905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.575936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.575952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.584307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.584339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.584354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.592515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.592548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.592562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.601139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.601172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.601187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.609720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.609753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.609768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.618130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.618164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.618179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.626475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.626507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.626522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.635050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.635082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.635097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.643290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.643326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.643342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.652035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.652067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.652082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.660721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.660754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.660769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.669114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.669146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.669162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.677621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.677653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.677667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.686488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.686520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.686535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.690 [2024-07-24 19:06:11.694996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.690 [2024-07-24 19:06:11.695028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.690 [2024-07-24 19:06:11.695044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.703263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.703296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.949 [2024-07-24 19:06:11.703311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.711826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.711859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:26.949 [2024-07-24 19:06:11.711874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.720270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.720302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.949 [2024-07-24 19:06:11.720317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.728724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.728756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.949 [2024-07-24 19:06:11.728770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.737614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.737646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.949 [2024-07-24 19:06:11.737661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.746232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.746264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.949 [2024-07-24 19:06:11.746279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.949 [2024-07-24 19:06:11.754497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.949 [2024-07-24 19:06:11.754530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.949 [2024-07-24 19:06:11.754546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.763115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.763148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.763163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.771713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.771746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.771760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.780104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.780136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.780150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.788546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.788578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.788599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.796984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.797018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.797033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.805235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.805268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.805282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.813660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.813691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.813706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.822070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.822102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.822117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.830571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.830745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.830764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.839249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.839282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.839299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.847738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.847770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.847786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.855983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.856016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.856031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.864309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.864341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.864356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.873057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.873091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.873107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.881574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.881618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.881634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.890395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 
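With a few hundred near-identical records it is easier to summarize the storm than to read it. The throwaway filter below is one way to do that; the substring patterns are just the literal strings from this log, and the tool assumes nothing about the test beyond them.

/* Triage helper: read the captured console log on stdin, count the
 * "data digest error" records, and track the range of failing LBAs
 * from the READ prints. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[8192];
    unsigned long errors = 0, reads = 0;
    unsigned lba = 0, len = 0, min_lba = 0xFFFFFFFFu, max_lba = 0;

    while (fgets(line, sizeof(line), stdin)) {
        /* Several records can share one physical line in the captured
         * log, so scan for every occurrence rather than the first. */
        for (const char *q = line; (q = strstr(q, "data digest error")); q++)
            errors++;
        for (const char *p = line; (p = strstr(p, "lba:")); p += 4) {
            if (sscanf(p, "lba:%u len:%u", &lba, &len) == 2) {
                reads++;
                if (lba < min_lba) min_lba = lba;
                if (lba > max_lba) max_lba = lba;
            }
        }
    }
    printf("digest errors: %lu, READ prints: %lu, lba %u..%u\n",
           errors, reads, min_lba, max_lba);
    return 0;
}

On this section it would report that every failure is a 32-block READ on nsid 1 on the same qpair, differing only in starting LBA, which suggests the digest path itself is being exercised rather than any particular region of the namespace.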
00:29:26.950 [2024-07-24 19:06:11.890428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.890443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.898968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.899000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.899015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.907268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.907300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.907315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.915688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.915721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.915736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.924112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.924145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.924160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.932512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.932545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.932564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.940974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.941007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.941022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.950 [2024-07-24 19:06:11.949179] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:26.950 [2024-07-24 19:06:11.949213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.950 [2024-07-24 19:06:11.949228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.209 [2024-07-24 19:06:11.957839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.209 [2024-07-24 19:06:11.957872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.209 [2024-07-24 19:06:11.957887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.209 [2024-07-24 19:06:11.966444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.209 [2024-07-24 19:06:11.966478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.209 [2024-07-24 19:06:11.966492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:11.974964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:11.974996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:11.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:11.983563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:11.983595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:11.983618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:11.992619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:11.992651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:11.992666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.002347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.002382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.002398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.011601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.011651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.011667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.020235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.020270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.020285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.029015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.029049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.029064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.037758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.037806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.046380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.046414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.046429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.055146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.055179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.055194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.064234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.064271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.064286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.073750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.073785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.073801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.082636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.082671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.082685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.092101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.092137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.092152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.101792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.101827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.101842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.111432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.111467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.111483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.120684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.120720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.120736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.129678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.129714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.129729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.138368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.138402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.138417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.146942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.146976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.146991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.155879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.155913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.155928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.164348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.164382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.164403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.172897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.172931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.172946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.181494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.181528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.181543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.189922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.189956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.210 [2024-07-24 19:06:12.189971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.198492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.198527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.210 [2024-07-24 19:06:12.198543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.210 [2024-07-24 19:06:12.207335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.210 [2024-07-24 19:06:12.207369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.211 [2024-07-24 19:06:12.207384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.211 [2024-07-24 19:06:12.215982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.211 [2024-07-24 19:06:12.216016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.211 [2024-07-24 19:06:12.216031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.224260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.224294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.224310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.232909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.232943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.232958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.241525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.241563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.241578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.250202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.250235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.250250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.258693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.258727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.258742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.267302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.267335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.267349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.275685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.275720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.469 [2024-07-24 19:06:12.275736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.469 [2024-07-24 19:06:12.284427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.469 [2024-07-24 19:06:12.284462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.284476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.293117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.293152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.293166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.301595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.301637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.301652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.310054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.310088] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.310108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.318824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.318858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.318873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.327325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.327360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.327374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.335917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.335951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.335965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.344716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.344749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.344763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.353379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.353414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.353428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.361911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.361945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.361960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.370473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.370506] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.370521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.379084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.379119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.379135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.387474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.387516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.387531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.396306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.396341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.396356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.405003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.405036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.405051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.413429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.413463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.413478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.421987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.422021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.422035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.430780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 
00:29:27.470 [2024-07-24 19:06:12.430814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.430828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.439310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.439343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.439358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.447678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.447711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.447726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.456346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.456380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.456394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.465188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.465221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.465235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.470 [2024-07-24 19:06:12.473776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.470 [2024-07-24 19:06:12.473808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.470 [2024-07-24 19:06:12.473823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.729 [2024-07-24 19:06:12.482401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a75810) 00:29:27.729 [2024-07-24 19:06:12.482433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.729 [2024-07-24 19:06:12.482448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.729 [2024-07-24 19:06:12.491037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
00:29:27.729 Latency(us)
00:29:27.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:27.729 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:27.729 nvme0n1 : 2.00 3527.12 440.89 0.00 0.00 4530.65 1407.53 11021.96
00:29:27.729 ===================================================================================================================
00:29:27.729 Total : 3527.12 440.89 0.00 0.00 4530.65 1407.53 11021.96
00:29:27.729 0
00:29:27.729 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:27.729 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:27.729 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:27.729 | .driver_specific
00:29:27.729 | .nvme_error
00:29:27.729 | .status_code
00:29:27.729 | .command_transient_transport_error'
00:29:27.729 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 227 > 0 ))
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2678044
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2678044 ']'
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2678044
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2678044
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2678044'
killing process with pid 2678044
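[Editor's note] The get_transient_errcount check traced above reduces to a single RPC call piped through jq. A minimal sketch, reusing the socket path and bdev name from the trace (the counter is only populated because the bdev was configured with bdev_nvme_set_options --nvme-error-stat, as the setup for the next run shows further below):

  # Read per-bdev NVMe error statistics and extract the transient transport
  # error counter that the test asserts on; here it read 227, so 227 > 0 passed.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'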
19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2678044
00:29:27.987 Received shutdown signal, test time was about 2.000000 seconds
00:29:27.987
00:29:27.987 Latency(us)
00:29:27.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:27.987 ===================================================================================================================
00:29:27.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:27.987 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2678044
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2678925
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2678925 /var/tmp/bperf.sock
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2678925 ']'
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:28.246 19:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
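[Editor's note] A brief, best-effort annotation of the bdevperf flags in the launch above; the values all come from run_bperf_err randwrite 4096 128, the flag meanings are the standard SPDK bdevperf options as I read them:

  # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  #   -m 2                    reactor core mask (core 1 only)
  #   -r /var/tmp/bperf.sock  RPC listen socket, used by bperf_rpc and bperf_py
  #   -w randwrite            workload pattern
  #   -o 4096                 I/O size in bytes
  #   -t 2                    run time in seconds (matches "test time was about 2.000000 seconds")
  #   -q 128                  queue depth
  #   -z                      start idle and wait for a perform_tests RPC before running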
00:29:28.246 [2024-07-24 19:06:13.104191] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:29:28.246 [2024-07-24 19:06:13.104253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678925 ]
00:29:28.246 EAL: No free 2048 kB hugepages reported on node 1
00:29:28.246 [2024-07-24 19:06:13.187168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:28.505 [2024-07-24 19:06:13.285743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:29.070 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:29.070 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:29.070 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:29.070 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:29.327 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:29.327 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.327 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:29.327 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.327 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:29.327 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.262 nvme0n1
00:29:30.262 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:30.262 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:30.262 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:30.262 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:30.262 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:30.262 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:30.262 Running I/O for 2 seconds...
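[Editor's note] The setup traced above is the core of the data-digest error case; condensed into plain RPC calls it is roughly the following sketch (commands taken from the trace; note that the corruption is injected via rpc_cmd rather than bperf_rpc, i.e. apparently into the nvmf target application, whose RPC socket is not shown here, while the digest-enabled controller is attached through the bdevperf socket):

  # bdevperf side: count NVMe errors per status code and retry at the bdev layer
  # indefinitely (--bdev-retry-count -1), then attach with data digest enabled
  # (--ddgst) so every TCP data PDU carries a CRC32C.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side (socket assumed, not shown in the trace): corrupt the next 256
  # crc32c operations in the accel layer, then start the 2-second workload; each
  # corrupted digest should surface below as a "Data digest error" plus a
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests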
00:29:30.262 [2024-07-24 19:06:15.187064] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190edd58
00:29:30.262 [2024-07-24 19:06:15.188422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:30.262 [2024-07-24 19:06:15.188469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:30.262 [... several dozen further entries of the same shape omitted for readability: tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0), each against a different pdu (0x2000190xxxxx), followed by the offending WRITE (sqid:1, varying cid and lba, len:1) and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps run from 19:06:15.200 to 19:06:15.755 ...]
00:29:30.781 [2024-07-24 19:06:15.769239] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ed0b0
[2024-07-24 19:06:15.770689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 19:06:15.770720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:30.781 [2024-07-24 19:06:15.784084] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e4140
[2024-07-24 19:06:15.785387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-07-24
19:06:15.785418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.798402] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e4140 00:29:31.040 [2024-07-24 19:06:15.799682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.799712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.812792] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e4140 00:29:31.040 [2024-07-24 19:06:15.814103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.814134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.827141] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e4140 00:29:31.040 [2024-07-24 19:06:15.828453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.828483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.842019] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ec408 00:29:31.040 [2024-07-24 19:06:15.843690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.843720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.856875] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f1868 00:29:31.040 [2024-07-24 19:06:15.858269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.858299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.871458] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e6fa8 00:29:31.040 [2024-07-24 19:06:15.872869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.872900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.885829] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e1f80 00:29:31.040 [2024-07-24 19:06:15.887305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:31.040 [2024-07-24 19:06:15.887335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.902063] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fcdd0 00:29:31.040 [2024-07-24 19:06:15.904375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.904405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.915826] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f7100 00:29:31.040 [2024-07-24 19:06:15.917519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.917551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:31.040 [2024-07-24 19:06:15.928781] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fc128 00:29:31.040 [2024-07-24 19:06:15.930780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.040 [2024-07-24 19:06:15.930811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:15.942704] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e3498 00:29:31.041 [2024-07-24 19:06:15.943745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:15.943775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:15.957810] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f1430 00:29:31.041 [2024-07-24 19:06:15.959092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:15.959122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:15.972465] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e73e0 00:29:31.041 [2024-07-24 19:06:15.973719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:15.973750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:15.987614] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e12d8 00:29:31.041 [2024-07-24 19:06:15.989090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15978 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:15.989119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:16.002424] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e9e10 00:29:31.041 [2024-07-24 19:06:16.003889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:16.003920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:16.017314] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ef6a8 00:29:31.041 [2024-07-24 19:06:16.019001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:16.019033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:16.030360] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fda78 00:29:31.041 [2024-07-24 19:06:16.032303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:16.032334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:31.041 [2024-07-24 19:06:16.044255] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f81e0 00:29:31.041 [2024-07-24 19:06:16.045316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.041 [2024-07-24 19:06:16.045346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.058666] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e38d0 00:29:31.300 [2024-07-24 19:06:16.059694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.059724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.073810] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fe2e8 00:29:31.300 [2024-07-24 19:06:16.075048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.075079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.088422] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f20d8 00:29:31.300 [2024-07-24 19:06:16.089677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:10738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.089707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.104568] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e8088 00:29:31.300 [2024-07-24 19:06:16.106651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.106683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.118408] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190dfdc0 00:29:31.300 [2024-07-24 19:06:16.119851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.119888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.132686] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ea248 00:29:31.300 [2024-07-24 19:06:16.134118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.134149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.147057] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f5378 00:29:31.300 [2024-07-24 19:06:16.148507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.148536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.161463] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fc560 00:29:31.300 [2024-07-24 19:06:16.162907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.162937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.176339] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e1710 00:29:31.300 [2024-07-24 19:06:16.178230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.178261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.192970] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e38d0 00:29:31.300 [2024-07-24 19:06:16.195446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:24306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.195478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.203596] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f0350 00:29:31.300 [2024-07-24 19:06:16.204616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.204647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.218280] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f0350 00:29:31.300 [2024-07-24 19:06:16.219376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.300 [2024-07-24 19:06:16.219407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:31.300 [2024-07-24 19:06:16.233225] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f3e60 00:29:31.300 [2024-07-24 19:06:16.234669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.301 [2024-07-24 19:06:16.234703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:31.301 [2024-07-24 19:06:16.250013] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f81e0 00:29:31.301 [2024-07-24 19:06:16.252033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.301 [2024-07-24 19:06:16.252064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:31.301 [2024-07-24 19:06:16.263743] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e1b48 00:29:31.301 [2024-07-24 19:06:16.265144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.301 [2024-07-24 19:06:16.265174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:31.301 [2024-07-24 19:06:16.278361] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190efae0 00:29:31.301 [2024-07-24 19:06:16.280206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.301 [2024-07-24 19:06:16.280236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:31.301 [2024-07-24 19:06:16.295102] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e7818 00:29:31.301 [2024-07-24 19:06:16.297543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.301 [2024-07-24 19:06:16.297573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.308849] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190efae0 00:29:31.560 [2024-07-24 19:06:16.310662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.310693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.321796] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fc560 00:29:31.560 [2024-07-24 19:06:16.323891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.323923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.335771] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fb480 00:29:31.560 [2024-07-24 19:06:16.336964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.336996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.350852] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190de8a8 00:29:31.560 [2024-07-24 19:06:16.352254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.352284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.365477] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190de8a8 00:29:31.560 [2024-07-24 19:06:16.366873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.366903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.380408] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fef90 00:29:31.560 [2024-07-24 19:06:16.382167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.382198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.395189] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190edd58 00:29:31.560 [2024-07-24 
19:06:16.396735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.396765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.410266] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e99d8 00:29:31.560 [2024-07-24 19:06:16.412014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.412044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.424877] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e99d8 00:29:31.560 [2024-07-24 19:06:16.426639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.426670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.441223] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f57b0 00:29:31.560 [2024-07-24 19:06:16.443695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.443726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.453011] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f7da8 00:29:31.560 [2024-07-24 19:06:16.454757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.454787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.466696] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ef6a8 00:29:31.560 [2024-07-24 19:06:16.467717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.467747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.480929] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ef6a8 00:29:31.560 [2024-07-24 19:06:16.481945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.481976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.495328] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ef6a8 
00:29:31.560 [2024-07-24 19:06:16.496350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.496385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.509690] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ef6a8 00:29:31.560 [2024-07-24 19:06:16.510726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.510756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.560 [2024-07-24 19:06:16.524085] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ef6a8 00:29:31.560 [2024-07-24 19:06:16.525100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.560 [2024-07-24 19:06:16.525131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.561 [2024-07-24 19:06:16.540449] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e3060 00:29:31.561 [2024-07-24 19:06:16.542335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.561 [2024-07-24 19:06:16.542368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:31.561 [2024-07-24 19:06:16.552222] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e8088 00:29:31.561 [2024-07-24 19:06:16.553318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.561 [2024-07-24 19:06:16.553348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.568301] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190df988 00:29:31.820 [2024-07-24 19:06:16.569772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.569804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.584208] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ec408 00:29:31.820 [2024-07-24 19:06:16.585970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.585999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.595911] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) 
with pdu=0x2000190fbcf0 00:29:31.820 [2024-07-24 19:06:16.596828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.596858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.611788] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ee5c8 00:29:31.820 [2024-07-24 19:06:16.613208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.613238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.628481] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fdeb0 00:29:31.820 [2024-07-24 19:06:16.630332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.630367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.642909] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ec408 00:29:31.820 [2024-07-24 19:06:16.644784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.644815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.656207] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e9168 00:29:31.820 [2024-07-24 19:06:16.658080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.658112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.673804] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fc560 00:29:31.820 [2024-07-24 19:06:16.676490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.676520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.685786] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f4298 00:29:31.820 [2024-07-24 19:06:16.687535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.687564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.700317] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d50bd0) with pdu=0x2000190e0a68 00:29:31.820 [2024-07-24 19:06:16.702127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.702157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.717964] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190de038 00:29:31.820 [2024-07-24 19:06:16.720500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.720530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.728557] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190df550 00:29:31.820 [2024-07-24 19:06:16.729666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.729696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.742177] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fe720 00:29:31.820 [2024-07-24 19:06:16.743273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.743303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.758171] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f6458 00:29:31.820 [2024-07-24 19:06:16.759780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.759811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.773024] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f7970 00:29:31.820 [2024-07-24 19:06:16.774295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.774325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.787560] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f7970 00:29:31.820 [2024-07-24 19:06:16.788751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.788782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.801963] tcp.c:2166:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f7970 00:29:31.820 [2024-07-24 19:06:16.803143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.803172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:31.820 [2024-07-24 19:06:16.818092] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f7970 00:29:31.820 [2024-07-24 19:06:16.820179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.820 [2024-07-24 19:06:16.820210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.829908] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fc128 00:29:32.080 [2024-07-24 19:06:16.831150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.831180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.845923] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f5378 00:29:32.080 [2024-07-24 19:06:16.847678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.847709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.861993] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fe2e8 00:29:32.080 [2024-07-24 19:06:16.864028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.864058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.874045] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e4578 00:29:32.080 [2024-07-24 19:06:16.875243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.875273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.890049] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190ea248 00:29:32.080 [2024-07-24 19:06:16.891785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.891815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.906170] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eea00 00:29:32.080 [2024-07-24 19:06:16.908149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.908178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.918011] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e0ea0 00:29:32.080 [2024-07-24 19:06:16.919153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.919184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.934195] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f0788 00:29:32.080 [2024-07-24 19:06:16.935843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.935877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.949771] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f6020 00:29:32.080 [2024-07-24 19:06:16.951224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.951255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.964047] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190e0ea0 00:29:32.080 [2024-07-24 19:06:16.965623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.965653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.977501] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f4f40 00:29:32.080 [2024-07-24 19:06:16.979069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.979099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:16.993886] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190fe720 00:29:32.080 [2024-07-24 19:06:16.995550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:16.995581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.080 
[2024-07-24 19:06:17.009021] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190f6020 00:29:32.080 [2024-07-24 19:06:17.010964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:17.010998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:17.020823] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 00:29:32.080 [2024-07-24 19:06:17.021852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:17.021883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:17.035186] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 00:29:32.080 [2024-07-24 19:06:17.036218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:17.036248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:17.049567] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 00:29:32.080 [2024-07-24 19:06:17.050600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:17.050633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:17.063897] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 00:29:32.080 [2024-07-24 19:06:17.064929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:17.064961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:32.080 [2024-07-24 19:06:17.078266] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 00:29:32.080 [2024-07-24 19:06:17.079295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.080 [2024-07-24 19:06:17.079325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:32.340 [2024-07-24 19:06:17.092627] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 00:29:32.340 [2024-07-24 19:06:17.093656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.340 [2024-07-24 19:06:17.093686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 
m:0 dnr:0
00:29:32.340 [2024-07-24 19:06:17.106952] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 [2024-07-24 19:06:17.107983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-07-24 19:06:17.108013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:32.340 [2024-07-24 19:06:17.121353] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 [2024-07-24 19:06:17.122382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-07-24 19:06:17.122412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:32.340 [2024-07-24 19:06:17.135679] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 [2024-07-24 19:06:17.136716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-07-24 19:06:17.136746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:32.340 [2024-07-24 19:06:17.150050] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 [2024-07-24 19:06:17.151081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-07-24 19:06:17.151112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:32.340 [2024-07-24 19:06:17.164449] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50bd0) with pdu=0x2000190eee38 [2024-07-24 19:06:17.165476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-07-24 19:06:17.165508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:32.340
00:29:32.340 Latency(us)
00:29:32.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:32.340 nvme0n1 : 2.01 17510.15 68.40 0.00 0.00 7295.57 3574.69 19065.02
00:29:32.340 ===================================================================================================================
00:29:32.340 Total : 17510.15 68.40 0.00 0.00 7295.57 3574.69 19065.02
00:29:32.340 0
00:29:32.340 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:32.340 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:32.340 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:32.340 | .driver_specific
00:29:32.340 | .nvme_error
00:29:32.340 | .status_code
00:29:32.340 | .command_transient_transport_error'
00:29:32.340 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:32.599 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 ))
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2678925
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2678925 ']'
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2678925
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2678925
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2678925'
00:29:32.600 killing process with pid 2678925
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2678925
00:29:32.600 Received shutdown signal, test time was about 2.000000 seconds
00:29:32.600
00:29:32.600 Latency(us)
00:29:32.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.600 ===================================================================================================================
00:29:32.600 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:32.600 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2678925
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2679758
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2679758 /var/tmp/bperf.sock
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2679758 ']'
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:32.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:32.859 19:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:32.859 [2024-07-24 19:06:17.775034] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:29:32.859 [2024-07-24 19:06:17.775095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679758 ]
00:29:32.859 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:32.859 Zero copy mechanism will not be used.
00:29:32.859 EAL: No free 2048 kB hugepages reported on node 1
00:29:32.859 [2024-07-24 19:06:17.856408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:33.117 [2024-07-24 19:06:17.962392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:33.683 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:33.683 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:33.683 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:33.683 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:34.250 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:34.250 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:34.250 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:34.250 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:34.250 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:34.250 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:34.508 nvme0n1
00:29:34.508 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:34.508 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:34.508 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10
-- # set +x 00:29:34.508 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.508 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:34.508 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:34.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:34.766 Zero copy mechanism will not be used. 00:29:34.766 Running I/O for 2 seconds... 00:29:34.766 [2024-07-24 19:06:19.716881] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:34.766 [2024-07-24 19:06:19.717469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.766 [2024-07-24 19:06:19.717511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.766 [2024-07-24 19:06:19.727923] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:34.766 [2024-07-24 19:06:19.728473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.766 [2024-07-24 19:06:19.728509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.766 [2024-07-24 19:06:19.737358] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:34.766 [2024-07-24 19:06:19.737891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.766 [2024-07-24 19:06:19.737924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.766 [2024-07-24 19:06:19.745820] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:34.766 [2024-07-24 19:06:19.746345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.766 [2024-07-24 19:06:19.746379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.766 [2024-07-24 19:06:19.755882] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:34.766 [2024-07-24 19:06:19.756439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.766 [2024-07-24 19:06:19.756471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.766 [2024-07-24 19:06:19.766969] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:34.766 [2024-07-24 19:06:19.767502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.766 
[2024-07-24 19:06:19.767541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.776060] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.776635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.776667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.784585] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.785117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.785150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.793857] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.794407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.794438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.802547] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.803090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.803121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.811345] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.811882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.811912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.820042] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.820586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.820625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.829054] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.829580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.829620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.838291] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.838848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.838879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.846829] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.847394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.847426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.854330] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.854848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.854879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.862349] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.862885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.862916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.871088] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.871673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.881202] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.881756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.881788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.891734] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.892285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.892315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.901418] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.902012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.909082] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.909599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.909638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.918213] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.918775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.918807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.925661] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.926196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.926226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.933455] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.933980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.934011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.942346] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.942908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.942939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.951940] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.952485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.952517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.962166] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.962730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.962761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.971753] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.972277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.026 [2024-07-24 19:06:19.972309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.026 [2024-07-24 19:06:19.981283] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.026 [2024-07-24 19:06:19.981825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:19.981856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.027 [2024-07-24 19:06:19.989889] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.027 [2024-07-24 19:06:19.990437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:19.990468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.027 [2024-07-24 19:06:19.998500] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.027 [2024-07-24 19:06:19.999030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:19.999066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.027 [2024-07-24 19:06:20.006963] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.027 [2024-07-24 19:06:20.007492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:20.007523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.027 [2024-07-24 19:06:20.014898] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.027 
[2024-07-24 19:06:20.015422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:20.015453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.027 [2024-07-24 19:06:20.023391] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.027 [2024-07-24 19:06:20.023487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:20.023517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.027 [2024-07-24 19:06:20.032377] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.027 [2024-07-24 19:06:20.032905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.027 [2024-07-24 19:06:20.032942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.039858] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.040375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.040408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.049118] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.049652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.049685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.058100] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.058637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.058668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.067492] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.068049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.068081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.076593] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.077122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.077154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.085744] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.086284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.086316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.094402] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.286 [2024-07-24 19:06:20.094934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.286 [2024-07-24 19:06:20.094965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.286 [2024-07-24 19:06:20.103265] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.103801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.103831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.112770] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.113594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.113648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.122877] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.123435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.123469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.132781] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.133341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.133372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.141943] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.142516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.142547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.150409] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.150936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.150967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.157652] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.158177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.158208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.164621] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.165133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.165164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.171231] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.171749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.171781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.177632] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.178182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.184260] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.184801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.184834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
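The run above is exercising data-digest error handling deliberately: each repeating ERROR/NOTICE triple is one WRITE whose crc32c data digest was corrupted by the accel error injection set up earlier, so digest verification fails and the command completes with COMMAND TRANSIENT TRANSPORT ERROR, which the test later tallies from iostat. A minimal sketch of that flow, assuming relative paths from the spdk checkout and that the injection RPC goes to the target application's default socket (the captured rpc_cmd carries no -s flag); the RPC names, flags, and jq filter are taken verbatim from the xtrace:

  # bperf side: drive the bdevperf app over its private RPC socket
  rpc='scripts/rpc.py -s /var/tmp/bperf.sock'
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side: make the accel crc32c operation return corrupted digests
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # run I/O, then read back how many completions were transient transport errors
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
               | .command_transient_transport_error'

host/digest.sh passes this phase only when that counter is greater than zero, which is the (( 137 > 0 )) check visible earlier in the trace; the accel_error_inject_error -o crc32c -t disable call issued before controller attach clears the fault between runs.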
00:29:35.287 [2024-07-24 19:06:20.192067] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.192596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.192635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.200699] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.201235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.201266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.212178] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.212741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.212772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.223144] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.223729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.223765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.233226] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.233766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.233798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.243075] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.243610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.243642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.253771] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.254296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.254326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.264265] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.264813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.264844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.274990] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.287 [2024-07-24 19:06:20.275536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.287 [2024-07-24 19:06:20.275567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.287 [2024-07-24 19:06:20.285735] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.288 [2024-07-24 19:06:20.285912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.288 [2024-07-24 19:06:20.285942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.546 [2024-07-24 19:06:20.296469] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.546 [2024-07-24 19:06:20.297009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.546 [2024-07-24 19:06:20.297041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.546 [2024-07-24 19:06:20.306315] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.546 [2024-07-24 19:06:20.306890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.546 [2024-07-24 19:06:20.306921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.546 [2024-07-24 19:06:20.315476] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.546 [2024-07-24 19:06:20.316047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.316078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.324944] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.325477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.325508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.334375] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.334957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.334988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.342668] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.343196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.343227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.350282] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.350815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.350846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.359220] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.359760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.359791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.369355] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.369885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.369916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.379106] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.379646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.379677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.387996] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.388518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.388556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.396043] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.396575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.396612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.403511] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.404026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.404057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.410811] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.411379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.411409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.421303] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.421472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.421500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.431266] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.431811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.431842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.441757] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.442325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.442356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.450310] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.450404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 
[2024-07-24 19:06:20.450433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.459145] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.459241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.459269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.469600] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.470177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.470207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.478855] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.479400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.479430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.489892] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.490472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.490503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.499960] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.500533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.500564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.547 [2024-07-24 19:06:20.510457] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.547 [2024-07-24 19:06:20.511025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.547 [2024-07-24 19:06:20.511056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.548 [2024-07-24 19:06:20.520250] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.548 [2024-07-24 19:06:20.520784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:35.548 [2024-07-24 19:06:20.520815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.548 [2024-07-24 19:06:20.530342] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.548 [2024-07-24 19:06:20.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.548 [2024-07-24 19:06:20.530581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.548 [2024-07-24 19:06:20.540139] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.548 [2024-07-24 19:06:20.540679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.548 [2024-07-24 19:06:20.540710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.548 [2024-07-24 19:06:20.547702] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.548 [2024-07-24 19:06:20.548240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.548 [2024-07-24 19:06:20.548270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.807 [2024-07-24 19:06:20.557600] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.807 [2024-07-24 19:06:20.557794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.807 [2024-07-24 19:06:20.557824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.807 [2024-07-24 19:06:20.566865] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.807 [2024-07-24 19:06:20.567438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.807 [2024-07-24 19:06:20.567469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.807 [2024-07-24 19:06:20.575846] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.807 [2024-07-24 19:06:20.576373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.807 [2024-07-24 19:06:20.576403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.807 [2024-07-24 19:06:20.584619] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.807 [2024-07-24 19:06:20.585137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.585169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.592902] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.593416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.593448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.601705] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.602236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.602266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.609855] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.610068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.610098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.619859] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.620388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.620418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.629658] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.630188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.630224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.637541] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.638063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.808 [2024-07-24 19:06:20.638094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.808 [2024-07-24 19:06:20.645770] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:35.808 [2024-07-24 19:06:20.646324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.808 [2024-07-24 19:06:20.646354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:35.808 [2024-07-24 19:06:20.652836] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90
00:29:35.808 [2024-07-24 19:06:20.653375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.808 [2024-07-24 19:06:20.653405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated near-identical triples elided: tcp.c:2166:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90, the offending WRITE (sqid:1 cid:15 nsid:1, varying lba, len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0, from 19:06:20.659526 through 19:06:21.362960 ...]
00:29:36.590 [2024-07-24 19:06:21.368655] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90
00:29:36.590 [2024-07-24 19:06:21.369128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.590 [2024-07-24 19:06:21.369159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
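[Editor's note, not part of the captured output: the data_crc32_calc_done errors above mean the CRC32C data digest (DDGST) computed over a received NVMe/TCP DATA PDU did not match the digest carried in the PDU, so each affected WRITE is completed with status 00/22 (generic / transient transport error) and dnr:0, i.e. retryable, exactly as the paired completion records show. A minimal self-contained sketch of the reference CRC32C computation follows; it is an illustration of the algorithm being checked, not SPDK's actual (table-driven/accelerated) implementation.]

/* crc32c_sketch.c - bitwise CRC32C (Castagnoli), the digest algorithm
 * behind the NVMe/TCP DDGST verification failing in this log. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;            /* initial value */

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)        /* one bit at a time */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;              /* final inversion */
}

int main(void)
{
    const char *msg = "123456789";
    /* Standard CRC32C check value for "123456789" is 0xE3069283.
     * A receiver computes this over the DATA PDU payload and compares
     * it with the PDU's DDGST field; a mismatch produces the
     * "Data digest error" records seen in this log. */
    printf("crc32c(\"%s\") = 0x%08X\n", msg, crc32c(msg, strlen(msg)));
    return 0;
}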
00:29:36.590 [2024-07-24 19:06:21.374889] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90
00:29:36.590 [2024-07-24 19:06:21.375359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.590 [2024-07-24 19:06:21.375390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repeated near-identical data digest error triples elided, same tqpair=(0x1d50d70) and pdu=0x2000190fef90, varying lba, from 19:06:21.381041 through 19:06:21.605226 ...]
00:29:36.851 [2024-07-24 19:06:21.613027] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90
00:29:36.851 [2024-07-24 19:06:21.613506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.613536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.622161] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.622639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.622669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.629854] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.630306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.630336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.638262] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.638746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.638778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.646350] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.646830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.646860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.654223] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.654696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.654726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.661966] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.662427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.662456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.851 [2024-07-24 19:06:21.668773] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.851 [2024-07-24 19:06:21.669264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.851 [2024-07-24 19:06:21.669294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.852 [2024-07-24 19:06:21.675645] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.852 [2024-07-24 19:06:21.676121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.852 [2024-07-24 19:06:21.676150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.852 [2024-07-24 19:06:21.682362] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.852 [2024-07-24 19:06:21.682833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.852 [2024-07-24 19:06:21.682864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.852 [2024-07-24 19:06:21.688782] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.852 [2024-07-24 19:06:21.689241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.852 [2024-07-24 19:06:21.689272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.852 [2024-07-24 19:06:21.695018] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.852 [2024-07-24 19:06:21.695470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.852 [2024-07-24 19:06:21.695501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.852 [2024-07-24 19:06:21.701936] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d50d70) with pdu=0x2000190fef90 00:29:36.852 [2024-07-24 19:06:21.702384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.852 [2024-07-24 19:06:21.702414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.852 00:29:36.852 Latency(us) 00:29:36.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.852 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:36.852 nvme0n1 : 2.00 3930.71 491.34 0.00 0.00 4062.51 2889.54 11379.43 00:29:36.852 =================================================================================================================== 00:29:36.852 Total : 3930.71 491.34 0.00 0.00 4062.51 2889.54 11379.43 00:29:36.852 0 00:29:36.852 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:36.852 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 
-- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:36.852 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:36.852 | .driver_specific 00:29:36.852 | .nvme_error 00:29:36.852 | .status_code 00:29:36.852 | .command_transient_transport_error' 00:29:36.852 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 253 > 0 )) 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2679758 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2679758 ']' 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2679758 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.113 19:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679758 00:29:37.113 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:37.113 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:37.113 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679758' 00:29:37.113 killing process with pid 2679758 00:29:37.113 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2679758 00:29:37.113 Received shutdown signal, test time was about 2.000000 seconds 00:29:37.113 00:29:37.113 Latency(us) 00:29:37.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.113 =================================================================================================================== 00:29:37.113 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.113 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2679758 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2676924 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2676924 ']' 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2676924 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2676924 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:37.545 19:06:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2676924' 00:29:37.545 killing process with pid 2676924 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2676924 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2676924 00:29:37.545 00:29:37.545 real 0m18.940s 00:29:37.545 user 0m38.961s 00:29:37.545 sys 0m4.226s 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.545 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:37.545 ************************************ 00:29:37.545 END TEST nvmf_digest_error 00:29:37.545 ************************************ 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.804 rmmod nvme_tcp 00:29:37.804 rmmod nvme_fabrics 00:29:37.804 rmmod nvme_keyring 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2676924 ']' 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2676924 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2676924 ']' 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2676924 00:29:37.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2676924) - No such process 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2676924 is not found' 00:29:37.804 Process with pid 2676924 is not found 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.804 
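For readers following the trace: the get_transient_errcount helper exercised above is a thin wrapper over one RPC call and one jq filter, and the test passes when the counter is positive. A minimal standalone sketch, assuming the rpc.py path and the bperf RPC socket used in this run; the 253 compared above is simply what this particular run counted:

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions reported by the
# host-side nvme bdev, as host/digest.sh does above (same jq filter, one line).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

get_transient_errcount() {
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

# Every corrupted-digest write should complete as a transient transport error,
# so any positive count means the digest machinery detected and reported them.
(( $(get_transient_errcount nvme0n1) > 0 ))

Note that the query goes to /var/tmp/bperf.sock, the bdevperf application's own RPC socket rather than the target's, because the error counters live on the initiator-side nvme bdev.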
00:29:37.804 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:39.708 19:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:39.708
00:29:39.708 real 0m45.387s
00:29:39.708 user 1m17.987s
00:29:39.708 sys 0m13.087s
00:29:39.708 19:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:39.708 19:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:39.708 ************************************
00:29:39.708 END TEST nvmf_digest
00:29:39.708 ************************************
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:39.967 19:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.967 ************************************
00:29:39.967 START TEST nvmf_bdevperf
00:29:39.967 ************************************
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:39.968 * Looking for test storage...
00:29:39.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:39.968
19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.968 19:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.539 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:46.540 19:06:30 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:46.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.540 19:06:30 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:46.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:46.540 Found net devices under 0000:af:00.0: cvl_0_0 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:46.540 Found net devices under 0000:af:00.1: cvl_0_1 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:29:46.540 00:29:46.540 --- 10.0.0.2 ping statistics --- 00:29:46.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.540 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:29:46.540 00:29:46.540 --- 10.0.0.1 ping statistics --- 00:29:46.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.540 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:46.540 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2684243 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2684243 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2684243 ']' 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.541 19:06:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.541 [2024-07-24 19:06:30.750841] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
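The two pings above close out the link bring-up before the target application starts. Condensed from the trace, the plumbing is plain iproute2: one e810 port (cvl_0_1) stays in the root namespace as the initiator side, and its peer (cvl_0_0) is moved into a dedicated network namespace as the target side, so host and target traffic actually crosses the physical link. A sketch with the interface names and addresses from this run:

# Sketch of the nvmf_tcp_init sequence seen earlier in this trace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Both directions must answer before the nvmf target below is launched inside cvl_0_0_ns_spdk.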
00:29:46.541 [2024-07-24 19:06:30.750899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.541 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.541 [2024-07-24 19:06:30.838320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.541 [2024-07-24 19:06:30.944173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.541 [2024-07-24 19:06:30.944220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.541 [2024-07-24 19:06:30.944232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.541 [2024-07-24 19:06:30.944243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.541 [2024-07-24 19:06:30.944253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.541 [2024-07-24 19:06:30.944376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.541 [2024-07-24 19:06:30.944408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.541 [2024-07-24 19:06:30.944409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.800 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.800 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:46.800 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:46.800 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.801 [2024-07-24 19:06:31.734937] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.801 Malloc0 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:46.801 19:06:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:46.801 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:46.801 [2024-07-24 19:06:31.808365] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:47.060 {
00:29:47.060 "params": {
00:29:47.060 "name": "Nvme$subsystem",
00:29:47.060 "trtype": "$TEST_TRANSPORT",
00:29:47.060 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:47.060 "adrfam": "ipv4",
00:29:47.060 "trsvcid": "$NVMF_PORT",
00:29:47.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:47.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:47.060 "hdgst": ${hdgst:-false},
00:29:47.060 "ddgst": ${ddgst:-false}
00:29:47.060 },
00:29:47.060 "method": "bdev_nvme_attach_controller"
00:29:47.060 }
00:29:47.060 EOF
00:29:47.060 )")
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:29:47.060 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:47.060 "params": {
00:29:47.060 "name": "Nvme1",
00:29:47.060 "trtype": "tcp",
00:29:47.060 "traddr": "10.0.0.2",
00:29:47.060 "adrfam": "ipv4",
00:29:47.060 "trsvcid": "4420",
00:29:47.060 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:47.060 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:47.060 "hdgst": false,
00:29:47.060 "ddgst": false
00:29:47.060 },
00:29:47.060 "method": "bdev_nvme_attach_controller"
00:29:47.060 }'
00:29:47.060 [2024-07-24 19:06:31.861900] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
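Stripped of the xtrace noise, the entire target-side provisioning completed above is five RPCs. A condensed sketch, assuming rpc_cmd in this harness forwards to scripts/rpc.py on the default /var/tmp/spdk.sock (the arguments are copied verbatim from the trace):

# Sketch: bring up the NVMe-oF/TCP target exactly as tgt_init did above.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192              # flags as used by this run
rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM disk, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added, the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above confirms the target is reachable from the initiator namespace.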
00:29:47.060 [2024-07-24 19:06:31.861962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684460 ]
00:29:47.060 EAL: No free 2048 kB hugepages reported on node 1
00:29:47.060 [2024-07-24 19:06:31.943617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:47.319 [2024-07-24 19:06:32.030204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:48.256 Running I/O for 1 seconds...
00:29:48.256
00:29:48.256 Latency(us)
00:29:48.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:48.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:48.256 Verification LBA range: start 0x0 length 0x4000
00:29:48.256 Nvme1n1 : 1.01 5994.43 23.42 0.00 0.00 21248.25 1571.37 18707.55
00:29:48.256 ===================================================================================================================
00:29:48.256 Total : 5994.43 23.42 0.00 0.00 21248.25 1571.37 18707.55
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2684791
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:48.515 {
00:29:48.515 "params": {
00:29:48.515 "name": "Nvme$subsystem",
00:29:48.515 "trtype": "$TEST_TRANSPORT",
00:29:48.515 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:48.515 "adrfam": "ipv4",
00:29:48.515 "trsvcid": "$NVMF_PORT",
00:29:48.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:48.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:48.515 "hdgst": ${hdgst:-false},
00:29:48.515 "ddgst": ${ddgst:-false}
00:29:48.515 },
00:29:48.515 "method": "bdev_nvme_attach_controller"
00:29:48.515 }
00:29:48.515 EOF
00:29:48.515 )")
00:29:48.515 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:29:48.516 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:29:48.516 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:29:48.516 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:48.516 "params": {
00:29:48.516 "name": "Nvme1",
00:29:48.516 "trtype": "tcp",
00:29:48.516 "traddr": "10.0.0.2",
00:29:48.516 "adrfam": "ipv4",
00:29:48.516 "trsvcid": "4420",
00:29:48.516 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:48.516 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:48.516 "hdgst": false,
00:29:48.516 "ddgst": false
00:29:48.516 },
00:29:48.516 "method": "bdev_nvme_attach_controller"
00:29:48.516 }'
00:29:48.516 [2024-07-24 19:06:33.496622] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
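The --json /dev/fd/62 and /dev/fd/63 arguments above are process substitutions around gen_nvmf_target_json. A self-contained approximation of the second, 15-second run follows; treat the outer "subsystems"/"bdev" wrapper as an assumption (it does not appear in the trace and is reconstructed from SPDK's usual JSON config shape), and the trailing -f flag is carried over from the trace as-is:

#!/usr/bin/env bash
# Sketch: re-launch the second bdevperf run against the target configured above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

gen_config() {
        cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# -q 128: queue depth, -o 4096: 4 KiB I/O size, -w verify: write/read-back
# verification, -t 15: run for 15 seconds; the config arrives on an anonymous fd.
"$SPDK_DIR/build/examples/bdevperf" --json <(gen_config) -q 128 -o 4096 -w verify -t 15 -f

This is the run that the test then disrupts: the target (pid 2684243) is killed with SIGKILL three seconds in, which is why the trace below fills with ABORTED - SQ DELETION completions on the in-flight reads.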
00:29:48.516 [2024-07-24 19:06:33.496689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684791 ]
00:29:48.775 EAL: No free 2048 kB hugepages reported on node 1
00:29:48.775 [2024-07-24 19:06:33.577967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:48.775 [2024-07-24 19:06:33.660760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:49.034 Running I/O for 15 seconds...
00:29:51.571 19:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2684243
19:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:51.571 [2024-07-24 19:06:36.464910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.571 [2024-07-24 19:06:36.464956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every other outstanding I/O on the qpair (READs lba 126072-126440, WRITEs lba 126456-127080), each completed ABORTED - SQ DELETION (00/08) ...]
00:29:51.574 [2024-07-24 19:06:36.468047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2609030 is same with the state(6) to be set
00:29:51.574 [2024-07-24 19:06:36.468060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:51.574 [2024-07-24 19:06:36.468068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:51.574 [2024-07-24 19:06:36.468077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126448 len:8 PRP1 0x0 PRP2 0x0
00:29:51.574 [2024-07-24 19:06:36.468088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.574 [2024-07-24 19:06:36.468138] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2609030 was disconnected and freed. reset controller.
00:29:51.574 [2024-07-24 19:06:36.468194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.574 [2024-07-24 19:06:36.468207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort is printed for the admin queue's remaining ASYNC EVENT REQUESTs, cid:1 through cid:3 ...]
00:29:51.574 [2024-07-24 19:06:36.468281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:51.574 [2024-07-24 19:06:36.472514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.574 [2024-07-24 19:06:36.472545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:51.574 [2024-07-24 19:06:36.473244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.574 [2024-07-24 19:06:36.473266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:51.574 [2024-07-24 19:06:36.473277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:51.574 [2024-07-24 19:06:36.473543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:51.574 [2024-07-24 19:06:36.473814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.574 [2024-07-24 19:06:36.473827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.574 [2024-07-24 19:06:36.473837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.574 [2024-07-24 19:06:36.478093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
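A note on the failure mode in this cycle and the ones that follow: errno 111 from connect() is ECONNREFUSED. That is expected at this point in the test, because the target process was killed with kill -9 a few lines up, so nothing is listening on 10.0.0.2:4420 while bdevperf keeps trying to reconnect. A quick shell check of the errno mapping and the refusal itself (a sketch: the grep path assumes a Linux box with kernel headers installed, and /dev/tcp is a bash built-in pseudo-path, not a real file):

# errno 111 is ECONNREFUSED ("Connection refused"):
grep -w 111 /usr/include/asm-generic/errno.h
#   #define ECONNREFUSED    111     /* Connection refused */

# Any TCP client sees the same refusal while the target is down; bash's
# /dev/tcp pseudo-path attempts a TCP connect to 10.0.0.2:4420:
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
    && echo "target is listening" \
    || echo "connect() refused or timed out, as in the errno 111 lines above"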
00:29:51.574 [2024-07-24 19:06:36.487389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.574 [2024-07-24 19:06:36.487964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.574 [2024-07-24 19:06:36.487989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:51.574 [2024-07-24 19:06:36.488000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:51.574 [2024-07-24 19:06:36.488266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:51.574 [2024-07-24 19:06:36.488533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.574 [2024-07-24 19:06:36.488545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.574 [2024-07-24 19:06:36.488555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.574 [2024-07-24 19:06:36.492825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the identical resetting controller -> connect() failed, errno = 111 -> Resetting controller failed. cycle repeats twelve more times between 19:06:36.502120 and 19:06:36.670618 ...]
00:29:51.836 [2024-07-24 19:06:36.679639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.836 [2024-07-24 19:06:36.680220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.836 [2024-07-24 19:06:36.680242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:51.836 [2024-07-24 19:06:36.680253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:51.836 [2024-07-24 19:06:36.680521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:51.836 [2024-07-24 19:06:36.680794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.836 [2024-07-24 19:06:36.680808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.836 [2024-07-24 19:06:36.680817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.836 [2024-07-24 19:06:36.685063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.836 [2024-07-24 19:06:36.694335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.694902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.694924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.694935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.695198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.695465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.695478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.695487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.699732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.836 [2024-07-24 19:06:36.709005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.709534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.709578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.709600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.710192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.710498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.710511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.710521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.714771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.836 [2024-07-24 19:06:36.723544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.724131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.724154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.724165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.724429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.724701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.724714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.724728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.728983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.836 [2024-07-24 19:06:36.738263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.738845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.738869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.738880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.739144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.739409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.739423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.739432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.743683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.836 [2024-07-24 19:06:36.752955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.753440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.753463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.753474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.753744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.754010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.754023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.754034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.758280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.836 [2024-07-24 19:06:36.767556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.768143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.768165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.768176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.768440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.768713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.768726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.768736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.772981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.836 [2024-07-24 19:06:36.782251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.782765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.782791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.782802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.783066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.783331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.783345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.836 [2024-07-24 19:06:36.783355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.836 [2024-07-24 19:06:36.787609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.836 [2024-07-24 19:06:36.796882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.836 [2024-07-24 19:06:36.797348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.836 [2024-07-24 19:06:36.797371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.836 [2024-07-24 19:06:36.797382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.836 [2024-07-24 19:06:36.797652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.836 [2024-07-24 19:06:36.797920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.836 [2024-07-24 19:06:36.797933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.837 [2024-07-24 19:06:36.797943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.837 [2024-07-24 19:06:36.802186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.837 [2024-07-24 19:06:36.811467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.837 [2024-07-24 19:06:36.812051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.837 [2024-07-24 19:06:36.812074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.837 [2024-07-24 19:06:36.812085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.837 [2024-07-24 19:06:36.812350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.837 [2024-07-24 19:06:36.812624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.837 [2024-07-24 19:06:36.812637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.837 [2024-07-24 19:06:36.812648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.837 [2024-07-24 19:06:36.816893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.837 [2024-07-24 19:06:36.826161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.837 [2024-07-24 19:06:36.826757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.837 [2024-07-24 19:06:36.826800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.837 [2024-07-24 19:06:36.826823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.837 [2024-07-24 19:06:36.827401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.837 [2024-07-24 19:06:36.827751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.837 [2024-07-24 19:06:36.827765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.837 [2024-07-24 19:06:36.827775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.837 [2024-07-24 19:06:36.832012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.837 [2024-07-24 19:06:36.840784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.837 [2024-07-24 19:06:36.841352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.837 [2024-07-24 19:06:36.841395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:51.837 [2024-07-24 19:06:36.841416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:51.837 [2024-07-24 19:06:36.841991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:51.837 [2024-07-24 19:06:36.842260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.837 [2024-07-24 19:06:36.842273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.837 [2024-07-24 19:06:36.842283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.846525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.098 [2024-07-24 19:06:36.855539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.856016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.856058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.856080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.856626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.856893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.856906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.856916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.861153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.098 [2024-07-24 19:06:36.870360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.870957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.871004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.871028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.871538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.871813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.871826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.871837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.876088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.098 [2024-07-24 19:06:36.885119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.885705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.885750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.885774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.886354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.886807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.886820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.886830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.891077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.098 [2024-07-24 19:06:36.899842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.900457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.900500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.900523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.901019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.901286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.901300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.901309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.905617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.098 [2024-07-24 19:06:36.914377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.914979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.915024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.915047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.915641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.915946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.915960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.915970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.920211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.098 [2024-07-24 19:06:36.928982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.929568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.929590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.929612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.929877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.930143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.930156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.930166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.934409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.098 [2024-07-24 19:06:36.943678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.944238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.944260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.944270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.944535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.944810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.944824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.944833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.949080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.098 [2024-07-24 19:06:36.958346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.958946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.958990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.959012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.959591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.960150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.960163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.960173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.964418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.098 [2024-07-24 19:06:36.972926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.973509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.973551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.098 [2024-07-24 19:06:36.973573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.098 [2024-07-24 19:06:36.974112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.098 [2024-07-24 19:06:36.974378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.098 [2024-07-24 19:06:36.974395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.098 [2024-07-24 19:06:36.974405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.098 [2024-07-24 19:06:36.978658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.098 [2024-07-24 19:06:36.987673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.098 [2024-07-24 19:06:36.988260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.098 [2024-07-24 19:06:36.988302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:36.988324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:36.988916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:36.989378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:36.989391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:36.989401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:36.993644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.099 [2024-07-24 19:06:37.002406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.002970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.003013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.003035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.003641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.004188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.004201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.004210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.008447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.099 [2024-07-24 19:06:37.016968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.017437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.017460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.017470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.017741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.018007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.018020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.018030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.022273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.099 [2024-07-24 19:06:37.031544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.032130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.032153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.032163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.032427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.032700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.032713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.032723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.036960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.099 [2024-07-24 19:06:37.046225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.046732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.046776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.046799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.047359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.047631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.047645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.047655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.051899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.099 [2024-07-24 19:06:37.060914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.061464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.061486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.061498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.061770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.062037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.062050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.062060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.066298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.099 [2024-07-24 19:06:37.075564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.076156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.076198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.076220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.076826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.077386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.077398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.077408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.081666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.099 [2024-07-24 19:06:37.090178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.099 [2024-07-24 19:06:37.090767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.099 [2024-07-24 19:06:37.090811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.099 [2024-07-24 19:06:37.090833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.099 [2024-07-24 19:06:37.091367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.099 [2024-07-24 19:06:37.091639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.099 [2024-07-24 19:06:37.091653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.099 [2024-07-24 19:06:37.091663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.099 [2024-07-24 19:06:37.095906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.099 [2024-07-24 19:06:37.104932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.105507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.105529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.105539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.360 [2024-07-24 19:06:37.105811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.360 [2024-07-24 19:06:37.106077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.360 [2024-07-24 19:06:37.106089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.360 [2024-07-24 19:06:37.106099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.360 [2024-07-24 19:06:37.110342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.360 [2024-07-24 19:06:37.119610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.120194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.120216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.120227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.360 [2024-07-24 19:06:37.120491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.360 [2024-07-24 19:06:37.120762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.360 [2024-07-24 19:06:37.120776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.360 [2024-07-24 19:06:37.120790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.360 [2024-07-24 19:06:37.125033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.360 [2024-07-24 19:06:37.134301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.134895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.134937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.134959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.360 [2024-07-24 19:06:37.135499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.360 [2024-07-24 19:06:37.135772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.360 [2024-07-24 19:06:37.135785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.360 [2024-07-24 19:06:37.135795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.360 [2024-07-24 19:06:37.140032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.360 [2024-07-24 19:06:37.149037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.149629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.149673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.149695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.360 [2024-07-24 19:06:37.150272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.360 [2024-07-24 19:06:37.150565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.360 [2024-07-24 19:06:37.150578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.360 [2024-07-24 19:06:37.150588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.360 [2024-07-24 19:06:37.154841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.360 [2024-07-24 19:06:37.163620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.164112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.164154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.164177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.360 [2024-07-24 19:06:37.164694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.360 [2024-07-24 19:06:37.164961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.360 [2024-07-24 19:06:37.164974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.360 [2024-07-24 19:06:37.164983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.360 [2024-07-24 19:06:37.169221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.360 [2024-07-24 19:06:37.178261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.178780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.178824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.178846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.360 [2024-07-24 19:06:37.179424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.360 [2024-07-24 19:06:37.179777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.360 [2024-07-24 19:06:37.179791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.360 [2024-07-24 19:06:37.179801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.360 [2024-07-24 19:06:37.184045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.360 [2024-07-24 19:06:37.192804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.360 [2024-07-24 19:06:37.193360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.360 [2024-07-24 19:06:37.193382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.360 [2024-07-24 19:06:37.193392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.361 [2024-07-24 19:06:37.193664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.361 [2024-07-24 19:06:37.193931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.361 [2024-07-24 19:06:37.193944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.361 [2024-07-24 19:06:37.193954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.361 [2024-07-24 19:06:37.198201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.361 [2024-07-24 19:06:37.207471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.361 [2024-07-24 19:06:37.208054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.361 [2024-07-24 19:06:37.208077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.361 [2024-07-24 19:06:37.208088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.361 [2024-07-24 19:06:37.208352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.361 [2024-07-24 19:06:37.208624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.361 [2024-07-24 19:06:37.208637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.361 [2024-07-24 19:06:37.208648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.361 [2024-07-24 19:06:37.212889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.361 [2024-07-24 19:06:37.222145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.361 [2024-07-24 19:06:37.222727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.361 [2024-07-24 19:06:37.222749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.361 [2024-07-24 19:06:37.222759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.361 [2024-07-24 19:06:37.223027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.361 [2024-07-24 19:06:37.223292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.361 [2024-07-24 19:06:37.223305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.361 [2024-07-24 19:06:37.223314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.361 [2024-07-24 19:06:37.227575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.361 [2024-07-24 19:06:37.236841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.361 [2024-07-24 19:06:37.237400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.361 [2024-07-24 19:06:37.237422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.361 [2024-07-24 19:06:37.237432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.361 [2024-07-24 19:06:37.237703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.361 [2024-07-24 19:06:37.237968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.361 [2024-07-24 19:06:37.237981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.361 [2024-07-24 19:06:37.237990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.361 [2024-07-24 19:06:37.242234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.361 [2024-07-24 19:06:37.251499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.361 [2024-07-24 19:06:37.252082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.361 [2024-07-24 19:06:37.252104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.361 [2024-07-24 19:06:37.252115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.361 [2024-07-24 19:06:37.252378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.361 [2024-07-24 19:06:37.252652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.361 [2024-07-24 19:06:37.252666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.361 [2024-07-24 19:06:37.252675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.361 [2024-07-24 19:06:37.256921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.361 [2024-07-24 19:06:37.266178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.361 [2024-07-24 19:06:37.266760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.361 [2024-07-24 19:06:37.266782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:52.361 [2024-07-24 19:06:37.266792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:52.361 [2024-07-24 19:06:37.267058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:52.361 [2024-07-24 19:06:37.267324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.361 [2024-07-24 19:06:37.267337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.361 [2024-07-24 19:06:37.267350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.361 [2024-07-24 19:06:37.271595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.361 [2024-07-24 19:06:37.280865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.361 [2024-07-24 19:06:37.281389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.361 [2024-07-24 19:06:37.281432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.361 [2024-07-24 19:06:37.281454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.361 [2024-07-24 19:06:37.281987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.361 [2024-07-24 19:06:37.282255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.361 [2024-07-24 19:06:37.282268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.361 [2024-07-24 19:06:37.282278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.361 [2024-07-24 19:06:37.286523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.361 [2024-07-24 19:06:37.295583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.361 [2024-07-24 19:06:37.296167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.361 [2024-07-24 19:06:37.296211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.361 [2024-07-24 19:06:37.296233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.361 [2024-07-24 19:06:37.296753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.361 [2024-07-24 19:06:37.297019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.361 [2024-07-24 19:06:37.297032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.361 [2024-07-24 19:06:37.297042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.361 [2024-07-24 19:06:37.301278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.361 [2024-07-24 19:06:37.310297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.361 [2024-07-24 19:06:37.310807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.361 [2024-07-24 19:06:37.310829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.361 [2024-07-24 19:06:37.310840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.361 [2024-07-24 19:06:37.311106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.361 [2024-07-24 19:06:37.311372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.361 [2024-07-24 19:06:37.311385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.361 [2024-07-24 19:06:37.311394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.362 [2024-07-24 19:06:37.315641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.362 [2024-07-24 19:06:37.324905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.362 [2024-07-24 19:06:37.325495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.362 [2024-07-24 19:06:37.325545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.362 [2024-07-24 19:06:37.325567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.362 [2024-07-24 19:06:37.326123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.362 [2024-07-24 19:06:37.326390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.362 [2024-07-24 19:06:37.326402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.362 [2024-07-24 19:06:37.326412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.362 [2024-07-24 19:06:37.330656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.362 [2024-07-24 19:06:37.339665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.362 [2024-07-24 19:06:37.340256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.362 [2024-07-24 19:06:37.340277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.362 [2024-07-24 19:06:37.340287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.362 [2024-07-24 19:06:37.340551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.362 [2024-07-24 19:06:37.340825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.362 [2024-07-24 19:06:37.340839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.362 [2024-07-24 19:06:37.340848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.362 [2024-07-24 19:06:37.345082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.362 [2024-07-24 19:06:37.354338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.362 [2024-07-24 19:06:37.354932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.362 [2024-07-24 19:06:37.354975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.362 [2024-07-24 19:06:37.354998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.362 [2024-07-24 19:06:37.355576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.362 [2024-07-24 19:06:37.356106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.362 [2024-07-24 19:06:37.356120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.362 [2024-07-24 19:06:37.356130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.362 [2024-07-24 19:06:37.360376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.621 [2024-07-24 19:06:37.368905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.621 [2024-07-24 19:06:37.369405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.621 [2024-07-24 19:06:37.369427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.621 [2024-07-24 19:06:37.369437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.621 [2024-07-24 19:06:37.369708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.621 [2024-07-24 19:06:37.369978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.621 [2024-07-24 19:06:37.369991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.621 [2024-07-24 19:06:37.370001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.621 [2024-07-24 19:06:37.374249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.621 [2024-07-24 19:06:37.383516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.621 [2024-07-24 19:06:37.384104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.621 [2024-07-24 19:06:37.384148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.621 [2024-07-24 19:06:37.384171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.621 [2024-07-24 19:06:37.384765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.621 [2024-07-24 19:06:37.385235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.621 [2024-07-24 19:06:37.385248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.621 [2024-07-24 19:06:37.385257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.621 [2024-07-24 19:06:37.389500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.621 [2024-07-24 19:06:37.398257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.621 [2024-07-24 19:06:37.398807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.621 [2024-07-24 19:06:37.398830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.621 [2024-07-24 19:06:37.398841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.621 [2024-07-24 19:06:37.399105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.621 [2024-07-24 19:06:37.399371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.621 [2024-07-24 19:06:37.399384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.621 [2024-07-24 19:06:37.399393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.621 [2024-07-24 19:06:37.403650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.621 [2024-07-24 19:06:37.412920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.621 [2024-07-24 19:06:37.413434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.621 [2024-07-24 19:06:37.413478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.621 [2024-07-24 19:06:37.413500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.621 [2024-07-24 19:06:37.414100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.621 [2024-07-24 19:06:37.414367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.621 [2024-07-24 19:06:37.414380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.621 [2024-07-24 19:06:37.414389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.621 [2024-07-24 19:06:37.418640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.621 [2024-07-24 19:06:37.427652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.621 [2024-07-24 19:06:37.428239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.621 [2024-07-24 19:06:37.428281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.621 [2024-07-24 19:06:37.428302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.621 [2024-07-24 19:06:37.428900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.429168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.429181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.429191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.433435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.442195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.442691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.442714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.442724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.442989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.443254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.443267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.443276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.447515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.456777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.457360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.457403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.457423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.458017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.458332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.458345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.458354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.462591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.471358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.471879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.471921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.471949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.472478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.472752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.472766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.472776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.477016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.486051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.486553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.486576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.486587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.486860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.487126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.487139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.487149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.491513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.500794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.501236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.501258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.501269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.501534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.501807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.501821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.501831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.506092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.515381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.515941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.515964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.515976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.516240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.516507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.516524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.516534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.520793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.530082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.530659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.530683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.530694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.530959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.531224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.531237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.531247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.535501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.544784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.545338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.545360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.545371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.545644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.545910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.545923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.545934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.550196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.559473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.560060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.560082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.560093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.560357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.560631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.622 [2024-07-24 19:06:37.560645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.622 [2024-07-24 19:06:37.560655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.622 [2024-07-24 19:06:37.564897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.622 [2024-07-24 19:06:37.574185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.622 [2024-07-24 19:06:37.574765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.622 [2024-07-24 19:06:37.574788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.622 [2024-07-24 19:06:37.574799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.622 [2024-07-24 19:06:37.575064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.622 [2024-07-24 19:06:37.575329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.623 [2024-07-24 19:06:37.575342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.623 [2024-07-24 19:06:37.575352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.623 [2024-07-24 19:06:37.579624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.623 [2024-07-24 19:06:37.588912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.623 [2024-07-24 19:06:37.589462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.623 [2024-07-24 19:06:37.589485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.623 [2024-07-24 19:06:37.589496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.623 [2024-07-24 19:06:37.589767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.623 [2024-07-24 19:06:37.590032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.623 [2024-07-24 19:06:37.590045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.623 [2024-07-24 19:06:37.590055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.623 [2024-07-24 19:06:37.594297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.623 [2024-07-24 19:06:37.603578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.623 [2024-07-24 19:06:37.604173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.623 [2024-07-24 19:06:37.604195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.623 [2024-07-24 19:06:37.604206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.623 [2024-07-24 19:06:37.604470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.623 [2024-07-24 19:06:37.604743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.623 [2024-07-24 19:06:37.604757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.623 [2024-07-24 19:06:37.604766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.623 [2024-07-24 19:06:37.609010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.623 [2024-07-24 19:06:37.618268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.623 [2024-07-24 19:06:37.618850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.623 [2024-07-24 19:06:37.618873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.623 [2024-07-24 19:06:37.618883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.623 [2024-07-24 19:06:37.619151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.623 [2024-07-24 19:06:37.619416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.623 [2024-07-24 19:06:37.619429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.623 [2024-07-24 19:06:37.619439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.623 [2024-07-24 19:06:37.623685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.632965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.633569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.633593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.633611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.633878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.634144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.634157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.634167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.638426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.647729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.648254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.648297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.648320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.648880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.649148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.649161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.649170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.653421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.662459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.663042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.663085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.663107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.663629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.663896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.663909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.663924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.668164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.677204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.677802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.677846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.677868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.678448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.678799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.678813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.678823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.683082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.691863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.692382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.692425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.692447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.692955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.693222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.693235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.693244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.697494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.706539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.707074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.707125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.707147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.707708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.707975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.707988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.707998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.712247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.721277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.721876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.721898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.721908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.722173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.722438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.722451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.722461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.726716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.736003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.736503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.736526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.736536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.736846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.737115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.737128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.737138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.741387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.883 [2024-07-24 19:06:37.750676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.883 [2024-07-24 19:06:37.751229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.883 [2024-07-24 19:06:37.751252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.883 [2024-07-24 19:06:37.751262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.883 [2024-07-24 19:06:37.751526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.883 [2024-07-24 19:06:37.751799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.883 [2024-07-24 19:06:37.751819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.883 [2024-07-24 19:06:37.751830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.883 [2024-07-24 19:06:37.756076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.765360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.765922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.765945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.765955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.766220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.766490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.766503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.766513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.770768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.780045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.780645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.780668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.780678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.780943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.781208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.781221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.781231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.785492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.794775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.795381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.795425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.795447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.795960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.796315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.796334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.796347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.802581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.809774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.810356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.810398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.810420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.811010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.811445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.811459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.811468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.815732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.824494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.825054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.825077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.825087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.825352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.825626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.825640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.825650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.829890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.839173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.839745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.839789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.839811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.840123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.840388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.840401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.840411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.844666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.853935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.854503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.854545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.854568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.855091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.855357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.855370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.855380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.859636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.868706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.869187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.869237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.869260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.870179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.870514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.870527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.870538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.874848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.884 [2024-07-24 19:06:37.883376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.884 [2024-07-24 19:06:37.883863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.884 [2024-07-24 19:06:37.883886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:52.884 [2024-07-24 19:06:37.883897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:52.884 [2024-07-24 19:06:37.884162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:52.884 [2024-07-24 19:06:37.884428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.884 [2024-07-24 19:06:37.884442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.884 [2024-07-24 19:06:37.884452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.884 [2024-07-24 19:06:37.888707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.145 [2024-07-24 19:06:37.897987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.145 [2024-07-24 19:06:37.898506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.145 [2024-07-24 19:06:37.898549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:53.145 [2024-07-24 19:06:37.898572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:53.145 [2024-07-24 19:06:37.899149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:53.145 [2024-07-24 19:06:37.899415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.145 [2024-07-24 19:06:37.899428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.145 [2024-07-24 19:06:37.899437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.145 [2024-07-24 19:06:37.903701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.145 [2024-07-24 19:06:37.912733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.145 [2024-07-24 19:06:37.913230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.145 [2024-07-24 19:06:37.913253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:53.145 [2024-07-24 19:06:37.913263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:53.145 [2024-07-24 19:06:37.913528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:53.145 [2024-07-24 19:06:37.913807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.145 [2024-07-24 19:06:37.913821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.145 [2024-07-24 19:06:37.913831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.145 [2024-07-24 19:06:37.918079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.145 [2024-07-24 19:06:37.927362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.145 [2024-07-24 19:06:37.927862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.145 [2024-07-24 19:06:37.927885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.145 [2024-07-24 19:06:37.927896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.145 [2024-07-24 19:06:37.928160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.145 [2024-07-24 19:06:37.928425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.145 [2024-07-24 19:06:37.928438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.145 [2024-07-24 19:06:37.928448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.145 [2024-07-24 19:06:37.932875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.145 [2024-07-24 19:06:37.941907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.145 [2024-07-24 19:06:37.942419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.145 [2024-07-24 19:06:37.942463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.145 [2024-07-24 19:06:37.942486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.145 [2024-07-24 19:06:37.943077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.145 [2024-07-24 19:06:37.943450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.145 [2024-07-24 19:06:37.943463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.145 [2024-07-24 19:06:37.943473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.145 [2024-07-24 19:06:37.947722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.145 [2024-07-24 19:06:37.956500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.145 [2024-07-24 19:06:37.956994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.145 [2024-07-24 19:06:37.957016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.145 [2024-07-24 19:06:37.957027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.145 [2024-07-24 19:06:37.957293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.145 [2024-07-24 19:06:37.957558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.145 [2024-07-24 19:06:37.957571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.145 [2024-07-24 19:06:37.957581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.145 [2024-07-24 19:06:37.961831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.145 [2024-07-24 19:06:37.971123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.145 [2024-07-24 19:06:37.971713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.145 [2024-07-24 19:06:37.971757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.145 [2024-07-24 19:06:37.971779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.145 [2024-07-24 19:06:37.972358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.145 [2024-07-24 19:06:37.972688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.145 [2024-07-24 19:06:37.972702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.145 [2024-07-24 19:06:37.972712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.145 [2024-07-24 19:06:37.976963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.145 [2024-07-24 19:06:37.985758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.145 [2024-07-24 19:06:37.986319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.145 [2024-07-24 19:06:37.986362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.145 [2024-07-24 19:06:37.986385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.145 [2024-07-24 19:06:37.986851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:37.987118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:37.987131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:37.987140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:37.991399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.146 [2024-07-24 19:06:38.000437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.000874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.000897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.000908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.001173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.001440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.001453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.001463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.005727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.146 [2024-07-24 19:06:38.014995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.015584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.015637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.015668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.016247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.016717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.016731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.016741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.020985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.146 [2024-07-24 19:06:38.029750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.030343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.030387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.030397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.030669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.030936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.030949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.030959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.035209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.146 [2024-07-24 19:06:38.044477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.045060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.045082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.045092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.045356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.045629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.045643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.045653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.049892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.146 [2024-07-24 19:06:38.059153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.059742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.059786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.059807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.060386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.060863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.060881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.060891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.065139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.146 [2024-07-24 19:06:38.073904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.074488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.074530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.074550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.075077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.075344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.075357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.075367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.079615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.146 [2024-07-24 19:06:38.088644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.089194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.089216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.089226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.089489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.089761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.089775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.089784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.094021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.146 [2024-07-24 19:06:38.103285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.103796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.103839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.103860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.104439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.104939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.104953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.104963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.109201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.146 [2024-07-24 19:06:38.117965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.118455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.118497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.146 [2024-07-24 19:06:38.118520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.146 [2024-07-24 19:06:38.119105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.146 [2024-07-24 19:06:38.119371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.146 [2024-07-24 19:06:38.119384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.146 [2024-07-24 19:06:38.119393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.146 [2024-07-24 19:06:38.123636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.146 [2024-07-24 19:06:38.132633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.146 [2024-07-24 19:06:38.133210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.146 [2024-07-24 19:06:38.133231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.147 [2024-07-24 19:06:38.133242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.147 [2024-07-24 19:06:38.133505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.147 [2024-07-24 19:06:38.133776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.147 [2024-07-24 19:06:38.133790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.147 [2024-07-24 19:06:38.133800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.147 [2024-07-24 19:06:38.138039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.147 [2024-07-24 19:06:38.147292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.147 [2024-07-24 19:06:38.147869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.147 [2024-07-24 19:06:38.147892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.147 [2024-07-24 19:06:38.147902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.147 [2024-07-24 19:06:38.148165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.147 [2024-07-24 19:06:38.148431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.147 [2024-07-24 19:06:38.148444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.147 [2024-07-24 19:06:38.148453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.407 [2024-07-24 19:06:38.152707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.407 [2024-07-24 19:06:38.161968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.407 [2024-07-24 19:06:38.162552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.407 [2024-07-24 19:06:38.162574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.407 [2024-07-24 19:06:38.162584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.407 [2024-07-24 19:06:38.162861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.407 [2024-07-24 19:06:38.163128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.407 [2024-07-24 19:06:38.163141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.407 [2024-07-24 19:06:38.163150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.407 [2024-07-24 19:06:38.167385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.407 [2024-07-24 19:06:38.176664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.407 [2024-07-24 19:06:38.177249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.407 [2024-07-24 19:06:38.177292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.407 [2024-07-24 19:06:38.177315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.407 [2024-07-24 19:06:38.177858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.407 [2024-07-24 19:06:38.178127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.407 [2024-07-24 19:06:38.178140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.407 [2024-07-24 19:06:38.178150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.407 [2024-07-24 19:06:38.182403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.407 [2024-07-24 19:06:38.191439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.407 [2024-07-24 19:06:38.192000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.407 [2024-07-24 19:06:38.192023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.407 [2024-07-24 19:06:38.192034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.407 [2024-07-24 19:06:38.192299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.407 [2024-07-24 19:06:38.192565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.407 [2024-07-24 19:06:38.192578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.407 [2024-07-24 19:06:38.192587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.407 [2024-07-24 19:06:38.196840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.407 [2024-07-24 19:06:38.206123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.407 [2024-07-24 19:06:38.206687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.407 [2024-07-24 19:06:38.206731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.407 [2024-07-24 19:06:38.206753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.407 [2024-07-24 19:06:38.207234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.407 [2024-07-24 19:06:38.207502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.407 [2024-07-24 19:06:38.207515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.407 [2024-07-24 19:06:38.207530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.407 [2024-07-24 19:06:38.211784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.407 [2024-07-24 19:06:38.220798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.407 [2024-07-24 19:06:38.221298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.407 [2024-07-24 19:06:38.221319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.407 [2024-07-24 19:06:38.221330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.407 [2024-07-24 19:06:38.221595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.407 [2024-07-24 19:06:38.221870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.407 [2024-07-24 19:06:38.221883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.407 [2024-07-24 19:06:38.221895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.407 [2024-07-24 19:06:38.226147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.407 [2024-07-24 19:06:38.235427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.407 [2024-07-24 19:06:38.236023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.407 [2024-07-24 19:06:38.236068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.407 [2024-07-24 19:06:38.236090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.236681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.237179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.237192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.237201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.241461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.408 [2024-07-24 19:06:38.250019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.250607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.250631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.250642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.250908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.251174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.251187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.251197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.255445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.408 [2024-07-24 19:06:38.264710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.265294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.265315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.265325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.265590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.265865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.265879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.265888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.270137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.408 [2024-07-24 19:06:38.279407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.279971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.279994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.280005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.280270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.280535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.280548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.280558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.284813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.408 [2024-07-24 19:06:38.294083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.294662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.294686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.294697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.294962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.295227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.295240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.295250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.299502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.408 [2024-07-24 19:06:38.308787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.309372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.309415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.309438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.310033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.310346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.310360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.310369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.314616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.408 [2024-07-24 19:06:38.323371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.323969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.324013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.324036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.324543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.324815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.324829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.324840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.329083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.408 [2024-07-24 19:06:38.338096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.338653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.338675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.338686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.338951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.339217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.339230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.339240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.343485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.408 [2024-07-24 19:06:38.352750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.353248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.353271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.353282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.353547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.353820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.353834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.353843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.358088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.408 [2024-07-24 19:06:38.367362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.367943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.367965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.367976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.368240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.368505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.408 [2024-07-24 19:06:38.368518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.408 [2024-07-24 19:06:38.368527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.408 [2024-07-24 19:06:38.372771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.408 [2024-07-24 19:06:38.382038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.408 [2024-07-24 19:06:38.382633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.408 [2024-07-24 19:06:38.382676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.408 [2024-07-24 19:06:38.382698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.408 [2024-07-24 19:06:38.383276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.408 [2024-07-24 19:06:38.383636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.409 [2024-07-24 19:06:38.383650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.409 [2024-07-24 19:06:38.383659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.409 [2024-07-24 19:06:38.387911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.409 [2024-07-24 19:06:38.396671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.409 [2024-07-24 19:06:38.397156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.409 [2024-07-24 19:06:38.397212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.409 [2024-07-24 19:06:38.397224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.409 [2024-07-24 19:06:38.397488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.409 [2024-07-24 19:06:38.397761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.409 [2024-07-24 19:06:38.397775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.409 [2024-07-24 19:06:38.397785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.409 [2024-07-24 19:06:38.402033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.409 [2024-07-24 19:06:38.411298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.409 [2024-07-24 19:06:38.411860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.409 [2024-07-24 19:06:38.411883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.409 [2024-07-24 19:06:38.411897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.409 [2024-07-24 19:06:38.412161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.409 [2024-07-24 19:06:38.412426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.409 [2024-07-24 19:06:38.412439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.409 [2024-07-24 19:06:38.412448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.669 [2024-07-24 19:06:38.416701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.669 [2024-07-24 19:06:38.425969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.669 [2024-07-24 19:06:38.426565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-24 19:06:38.426619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.669 [2024-07-24 19:06:38.426643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.669 [2024-07-24 19:06:38.427222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.669 [2024-07-24 19:06:38.427748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.669 [2024-07-24 19:06:38.427762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.669 [2024-07-24 19:06:38.427773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.669 [2024-07-24 19:06:38.432012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.669 [2024-07-24 19:06:38.440525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.669 [2024-07-24 19:06:38.441114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-24 19:06:38.441157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.669 [2024-07-24 19:06:38.441179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.669 [2024-07-24 19:06:38.441740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.669 [2024-07-24 19:06:38.442007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.669 [2024-07-24 19:06:38.442019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.669 [2024-07-24 19:06:38.442029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.669 [2024-07-24 19:06:38.446270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.669 [2024-07-24 19:06:38.455288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.669 [2024-07-24 19:06:38.455884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-24 19:06:38.455929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.669 [2024-07-24 19:06:38.455951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.669 [2024-07-24 19:06:38.456460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.669 [2024-07-24 19:06:38.456736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.669 [2024-07-24 19:06:38.456750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.669 [2024-07-24 19:06:38.456760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.669 [2024-07-24 19:06:38.461001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.669 [2024-07-24 19:06:38.470009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.669 [2024-07-24 19:06:38.470592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-24 19:06:38.470619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.669 [2024-07-24 19:06:38.470631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.669 [2024-07-24 19:06:38.470897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.669 [2024-07-24 19:06:38.471162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.669 [2024-07-24 19:06:38.471175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.669 [2024-07-24 19:06:38.471186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.669 [2024-07-24 19:06:38.475432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.669 [2024-07-24 19:06:38.484723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.669 [2024-07-24 19:06:38.485314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.669 [2024-07-24 19:06:38.485357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.669 [2024-07-24 19:06:38.485379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.669 [2024-07-24 19:06:38.485906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.669 [2024-07-24 19:06:38.486173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.670 [2024-07-24 19:06:38.486186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.670 [2024-07-24 19:06:38.486195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.670 [2024-07-24 19:06:38.490440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.670 [2024-07-24 19:06:38.499469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.670 [2024-07-24 19:06:38.500061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-24 19:06:38.500104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.670 [2024-07-24 19:06:38.500126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.670 [2024-07-24 19:06:38.500718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.670 [2024-07-24 19:06:38.501002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.670 [2024-07-24 19:06:38.501015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.670 [2024-07-24 19:06:38.501024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.670 [2024-07-24 19:06:38.505303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.670 [2024-07-24 19:06:38.514195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.670 [2024-07-24 19:06:38.514762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-24 19:06:38.514808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.670 [2024-07-24 19:06:38.514831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.670 [2024-07-24 19:06:38.515411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.670 [2024-07-24 19:06:38.515981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.670 [2024-07-24 19:06:38.516001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.670 [2024-07-24 19:06:38.516014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.670 [2024-07-24 19:06:38.522253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.670 [2024-07-24 19:06:38.529426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.670 [2024-07-24 19:06:38.530016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.670 [2024-07-24 19:06:38.530060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:53.670 [2024-07-24 19:06:38.530083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:53.670 [2024-07-24 19:06:38.530674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:53.670 [2024-07-24 19:06:38.531093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.670 [2024-07-24 19:06:38.531106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.670 [2024-07-24 19:06:38.531115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.670 [2024-07-24 19:06:38.535360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the identical reset/reconnect-failure cycle for nqn.2016-06.io.spdk:cnode1 (connect() to 10.0.0.2, port 4420 refused with errno 111; tqpair=0x23d7e90; "Resetting controller failed.") repeats 50 more times between 19:06:38.544 and 19:06:39.268 ...]
00:29:54.456 [2024-07-24 19:06:39.276987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.456 [2024-07-24 19:06:39.277474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.456 [2024-07-24 19:06:39.277497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.456 [2024-07-24 19:06:39.277507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.456 [2024-07-24 19:06:39.277777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.456 [2024-07-24 19:06:39.278043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.456 [2024-07-24 19:06:39.278057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.456 [2024-07-24 19:06:39.278066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.456 [2024-07-24 19:06:39.282307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.456 [2024-07-24 19:06:39.291588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.456 [2024-07-24 19:06:39.292057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.456 [2024-07-24 19:06:39.292079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.456 [2024-07-24 19:06:39.292089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.456 [2024-07-24 19:06:39.292353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.456 [2024-07-24 19:06:39.292625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.456 [2024-07-24 19:06:39.292639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.456 [2024-07-24 19:06:39.292652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.456 [2024-07-24 19:06:39.296902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.456 [2024-07-24 19:06:39.306185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.456 [2024-07-24 19:06:39.306766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.456 [2024-07-24 19:06:39.306789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.456 [2024-07-24 19:06:39.306799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.456 [2024-07-24 19:06:39.307065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.456 [2024-07-24 19:06:39.307331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.456 [2024-07-24 19:06:39.307344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.456 [2024-07-24 19:06:39.307354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.456 [2024-07-24 19:06:39.311606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.456 [2024-07-24 19:06:39.320881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.456 [2024-07-24 19:06:39.321465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.456 [2024-07-24 19:06:39.321487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.456 [2024-07-24 19:06:39.321498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.456 [2024-07-24 19:06:39.321771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.456 [2024-07-24 19:06:39.322038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.456 [2024-07-24 19:06:39.322051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.456 [2024-07-24 19:06:39.322060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.456 [2024-07-24 19:06:39.326297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.456 [2024-07-24 19:06:39.335572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.336078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.336103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.336114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.336379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.336652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.336666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.336676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.340923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.457 [2024-07-24 19:06:39.350185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.350762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.350789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.350800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.351065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.351331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.351344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.351353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.355607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.457 [2024-07-24 19:06:39.364881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.365460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.365482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.365492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.365763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.366028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.366041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.366050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.370293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.457 [2024-07-24 19:06:39.379569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.380066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.380088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.380098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.380361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.380633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.380646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.380656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.384911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.457 [2024-07-24 19:06:39.394212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.394813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.394857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.394883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.395148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.395418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.395431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.395441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.399697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.457 [2024-07-24 19:06:39.408979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.409594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.409621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.409632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.409897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.410162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.410175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.410185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.414426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.457 [2024-07-24 19:06:39.423712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.424145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.424168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.457 [2024-07-24 19:06:39.424179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.457 [2024-07-24 19:06:39.424443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.457 [2024-07-24 19:06:39.424716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.457 [2024-07-24 19:06:39.424730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.457 [2024-07-24 19:06:39.424740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.457 [2024-07-24 19:06:39.428987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.457 [2024-07-24 19:06:39.438277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.457 [2024-07-24 19:06:39.438821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.457 [2024-07-24 19:06:39.438844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.458 [2024-07-24 19:06:39.438855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.458 [2024-07-24 19:06:39.439120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.458 [2024-07-24 19:06:39.439385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.458 [2024-07-24 19:06:39.439398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.458 [2024-07-24 19:06:39.439407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.458 [2024-07-24 19:06:39.443664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
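Each retry cycle above has the same shape: the host disconnects the controller, a fresh connect() to 10.0.0.2 port 4420 is refused with errno = 111 (ECONNREFUSED) because nothing is listening on the target side, flushing the dead qpair then fails with "Bad file descriptor", and the reset completes with an error before the next attempt starts. A minimal stand-alone C sketch of that connect-and-retry shape (illustrative only, with an assumed retry budget and helper name; this is not the SPDK source the log cites):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One connect attempt; returns 0 on success, a negative errno on failure. */
static int try_connect(const char *ip, uint16_t port)
{
	struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
	int fd, rc;

	if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
		return -EINVAL;
	}
	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return -errno;
	}
	rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
	if (rc != 0) {
		rc = -errno;	/* -ECONNREFUSED (111) while no listener is up */
	}
	close(fd);
	return rc;
}

int main(void)
{
	/* The 5-attempt budget is an assumption for illustration only. */
	for (int attempt = 1; attempt <= 5; attempt++) {
		int rc = try_connect("10.0.0.2", 4420);
		if (rc == 0) {
			printf("connected\n");
			return 0;
		}
		printf("attempt %d: connect() failed: %s\n", attempt, strerror(-rc));
		sleep(1);	/* back off, then reset and reconnect again */
	}
	printf("controller reinitialization failed; giving up\n");
	return 1;
}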
00:29:54.458 [2024-07-24 19:06:39.452959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.458 [2024-07-24 19:06:39.453546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.458 [2024-07-24 19:06:39.453589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.458 [2024-07-24 19:06:39.453626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.458 [2024-07-24 19:06:39.454187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.458 [2024-07-24 19:06:39.454453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.458 [2024-07-24 19:06:39.454466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.458 [2024-07-24 19:06:39.454475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2684243 Killed "${NVMF_APP[@]}" "$@"
00:29:54.458 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:54.458 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:54.458 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:54.458 [2024-07-24 19:06:39.458736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.458 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:54.458 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2685822
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2685822
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2685822 ']'
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:54.719 [2024-07-24 19:06:39.467520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:54.719 [2024-07-24 19:06:39.468008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.719 [2024-07-24 19:06:39.468031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.719 [2024-07-24 19:06:39.468042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.719 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:54.719 [2024-07-24 19:06:39.468307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.719 [2024-07-24 19:06:39.468572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.719 [2024-07-24 19:06:39.468586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.719 [2024-07-24 19:06:39.468595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.719 [2024-07-24 19:06:39.472852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.719 [2024-07-24 19:06:39.482148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.719 [2024-07-24 19:06:39.482723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.719 [2024-07-24 19:06:39.482745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.719 [2024-07-24 19:06:39.482756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.719 [2024-07-24 19:06:39.483021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.719 [2024-07-24 19:06:39.483287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.719 [2024-07-24 19:06:39.483300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.719 [2024-07-24 19:06:39.483309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.719 [2024-07-24 19:06:39.487568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
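At this point the harness itself has intervened: bdevperf.sh line 35 reports the previous nvmf target process killed, tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls (max_retries=100) until the new process listens on the RPC socket /var/tmp/spdk.sock. A conceptual C sketch of such a readiness probe follows (the real waitforlisten is a bash helper in autotest_common.sh; this shape and the poll interval are assumptions):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* One probe: can we connect to the UNIX-domain RPC socket yet? */
static int rpc_sock_ready(const char *path)
{
	struct sockaddr_un sa = { .sun_family = AF_UNIX };
	int fd, rc;

	strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		return 0;
	}
	rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
	close(fd);
	return rc == 0;
}

int main(void)
{
	/* max_retries=100 mirrors the value traced above. */
	for (int i = 0; i < 100; i++) {
		if (rpc_sock_ready("/var/tmp/spdk.sock")) {
			printf("RPC socket is up\n");
			return 0;
		}
		usleep(100 * 1000);	/* 100 ms poll interval is an assumption */
	}
	fprintf(stderr, "target never started listening\n");
	return 1;
}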
00:29:54.719 [2024-07-24 19:06:39.496871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.719 [2024-07-24 19:06:39.497440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.719 [2024-07-24 19:06:39.497463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.719 [2024-07-24 19:06:39.497473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.719 [2024-07-24 19:06:39.497744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.719 [2024-07-24 19:06:39.498011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.719 [2024-07-24 19:06:39.498024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.719 [2024-07-24 19:06:39.498034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.719 [2024-07-24 19:06:39.502288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.719 [2024-07-24 19:06:39.511594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.719 [2024-07-24 19:06:39.512075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.719 [2024-07-24 19:06:39.512097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.719 [2024-07-24 19:06:39.512108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.719 [2024-07-24 19:06:39.512373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.719 [2024-07-24 19:06:39.512646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.719 [2024-07-24 19:06:39.512660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.719 [2024-07-24 19:06:39.512670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.719 [2024-07-24 19:06:39.516922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.719 [2024-07-24 19:06:39.521305] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:29:54.719 [2024-07-24 19:06:39.521358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:54.719 [2024-07-24 19:06:39.526206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.719 [2024-07-24 19:06:39.526773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.719 [2024-07-24 19:06:39.526796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.719 [2024-07-24 19:06:39.526806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.719 [2024-07-24 19:06:39.527072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.719 [2024-07-24 19:06:39.527338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.527351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.527361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.531611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.540753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.541261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.541287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.541298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.541564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.541838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.541851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.541860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.546106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.555373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.555958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.555980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.555991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.556254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.556519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.556530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.556540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 EAL: No free 2048 kB hugepages reported on node 1
00:29:54.720 [2024-07-24 19:06:39.560785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.570062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.570616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.570638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.570652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.570917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.571181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.571193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.571202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.575447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.584717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.585274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.585295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.585305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.585569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.585841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.585853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.585862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.590116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.599373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.599933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.599955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.599965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.600227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.600491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.600503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.600512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.604770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.608968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:54.720 [2024-07-24 19:06:39.614044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.614614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.614636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.614646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.614911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.615181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.615192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.615202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.619445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.628714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.629268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.629290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.629301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.629565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.629836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.629849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.629858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.634092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.643352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.643909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.643931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.643941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.720 [2024-07-24 19:06:39.644205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.720 [2024-07-24 19:06:39.644469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.720 [2024-07-24 19:06:39.644481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.720 [2024-07-24 19:06:39.644489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.720 [2024-07-24 19:06:39.648739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.720 [2024-07-24 19:06:39.658004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.720 [2024-07-24 19:06:39.658558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.720 [2024-07-24 19:06:39.658579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.720 [2024-07-24 19:06:39.658590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.721 [2024-07-24 19:06:39.658860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.721 [2024-07-24 19:06:39.659125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.721 [2024-07-24 19:06:39.659137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.721 [2024-07-24 19:06:39.659146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.721 [2024-07-24 19:06:39.663395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.721 [2024-07-24 19:06:39.672681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.721 [2024-07-24 19:06:39.673284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.721 [2024-07-24 19:06:39.673309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.721 [2024-07-24 19:06:39.673320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.721 [2024-07-24 19:06:39.673584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.721 [2024-07-24 19:06:39.673856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.721 [2024-07-24 19:06:39.673868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.721 [2024-07-24 19:06:39.673878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.721 [2024-07-24 19:06:39.678119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.721 [2024-07-24 19:06:39.687374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.721 [2024-07-24 19:06:39.687932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.721 [2024-07-24 19:06:39.687954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.721 [2024-07-24 19:06:39.687965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.721 [2024-07-24 19:06:39.688230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.721 [2024-07-24 19:06:39.688495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.721 [2024-07-24 19:06:39.688507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.721 [2024-07-24 19:06:39.688516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.721 [2024-07-24 19:06:39.692776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.721 [2024-07-24 19:06:39.702036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.721 [2024-07-24 19:06:39.702618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.721 [2024-07-24 19:06:39.702640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.721 [2024-07-24 19:06:39.702650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.721 [2024-07-24 19:06:39.702915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.721 [2024-07-24 19:06:39.703179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.721 [2024-07-24 19:06:39.703191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.721 [2024-07-24 19:06:39.703200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.721 [2024-07-24 19:06:39.707462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.721 [2024-07-24 19:06:39.715395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:54.721 [2024-07-24 19:06:39.715432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:54.721 [2024-07-24 19:06:39.715446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:54.721 [2024-07-24 19:06:39.715463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:54.721 [2024-07-24 19:06:39.715473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:54.721 [2024-07-24 19:06:39.715772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:54.721 [2024-07-24 19:06:39.715810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:54.721 [2024-07-24 19:06:39.715812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:54.721 [2024-07-24 19:06:39.716743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.721 [2024-07-24 19:06:39.717327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.721 [2024-07-24 19:06:39.717349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.721 [2024-07-24 19:06:39.717359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.721 [2024-07-24 19:06:39.717631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.721 [2024-07-24 19:06:39.717897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.721 [2024-07-24 19:06:39.717909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.721 [2024-07-24 19:06:39.717918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
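The restarted target is now coming up: DPDK EAL initialization completes, tracepoint group mask 0xFFFF is honored (a snapshot can be captured with 'spdk_trace -s nvmf -i 0'), and the core mask 0xE passed via -m (binary 1110) yields the three reactors started on cores 1, 2 and 3, consistent with "Total cores available: 3". A minimal sketch of that bootstrap against SPDK's public app framework (the option fields shown are assumptions drawn from the command line above; the real nvmf_tgt main() does considerably more):

#include "spdk/event.h"
#include "spdk/log.h"

/* Called on the main reactor once the framework is up. */
static void app_started(void *arg)
{
	/* nvmf_tgt would create its transports and subsystems here. */
	SPDK_NOTICELOG("target framework is running\n");
}

int main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	(void)argc;
	(void)argv;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "nvmf";
	opts.reactor_mask = "0xE";	/* 0b1110: one reactor each on cores 1, 2, 3 */
	opts.shm_id = 0;		/* corresponds to the "-i 0" instance id */

	rc = spdk_app_start(&opts, app_started, NULL);	/* blocks until spdk_app_stop() */
	spdk_app_fini();
	return rc;
}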
00:29:54.721 [2024-07-24 19:06:39.722212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.981 [2024-07-24 19:06:39.731497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.981 [2024-07-24 19:06:39.732070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.981 [2024-07-24 19:06:39.732094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.981 [2024-07-24 19:06:39.732105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.981 [2024-07-24 19:06:39.732370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.981 [2024-07-24 19:06:39.732642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.981 [2024-07-24 19:06:39.732655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.981 [2024-07-24 19:06:39.732665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.981 [2024-07-24 19:06:39.736911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.981 [2024-07-24 19:06:39.746188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.981 [2024-07-24 19:06:39.746759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.981 [2024-07-24 19:06:39.746784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.981 [2024-07-24 19:06:39.746795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.982 [2024-07-24 19:06:39.747059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.982 [2024-07-24 19:06:39.747324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.982 [2024-07-24 19:06:39.747336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.982 [2024-07-24 19:06:39.747346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.982 [2024-07-24 19:06:39.751599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.982 [2024-07-24 19:06:39.760893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.982 [2024-07-24 19:06:39.761455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.982 [2024-07-24 19:06:39.761477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.982 [2024-07-24 19:06:39.761488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.982 [2024-07-24 19:06:39.761759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.982 [2024-07-24 19:06:39.762027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.982 [2024-07-24 19:06:39.762039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.982 [2024-07-24 19:06:39.762048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.982 [2024-07-24 19:06:39.766343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.982 [2024-07-24 19:06:39.775642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.982 [2024-07-24 19:06:39.776237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.982 [2024-07-24 19:06:39.776261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.982 [2024-07-24 19:06:39.776272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.982 [2024-07-24 19:06:39.776537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.982 [2024-07-24 19:06:39.776811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.982 [2024-07-24 19:06:39.776824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.982 [2024-07-24 19:06:39.776834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.982 [2024-07-24 19:06:39.781073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.982 [2024-07-24 19:06:39.790363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.982 [2024-07-24 19:06:39.790878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.982 [2024-07-24 19:06:39.790900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.982 [2024-07-24 19:06:39.790911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.982 [2024-07-24 19:06:39.791175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.982 [2024-07-24 19:06:39.791440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.982 [2024-07-24 19:06:39.791451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.982 [2024-07-24 19:06:39.791460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.982 [2024-07-24 19:06:39.795702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.982 [2024-07-24 19:06:39.804985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.982 [2024-07-24 19:06:39.805544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.982 [2024-07-24 19:06:39.805566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:54.982 [2024-07-24 19:06:39.805581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:54.982 [2024-07-24 19:06:39.805852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:54.982 [2024-07-24 19:06:39.806117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.982 [2024-07-24 19:06:39.806129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.982 [2024-07-24 19:06:39.806138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.982 [2024-07-24 19:06:39.810383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.982 [2024-07-24 19:06:39.819656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.982 [2024-07-24 19:06:39.820212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.982 [2024-07-24 19:06:39.820233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.982 [2024-07-24 19:06:39.820243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.982 [2024-07-24 19:06:39.820506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.982 [2024-07-24 19:06:39.820777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.982 [2024-07-24 19:06:39.820790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.982 [2024-07-24 19:06:39.820799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.982 [2024-07-24 19:06:39.825044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.982 [2024-07-24 19:06:39.834309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.982 [2024-07-24 19:06:39.834882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.982 [2024-07-24 19:06:39.834904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.982 [2024-07-24 19:06:39.834914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.982 [2024-07-24 19:06:39.835178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.982 [2024-07-24 19:06:39.835442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.982 [2024-07-24 19:06:39.835454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.982 [2024-07-24 19:06:39.835463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.982 [2024-07-24 19:06:39.839710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.982 [2024-07-24 19:06:39.848978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.982 [2024-07-24 19:06:39.849532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.982 [2024-07-24 19:06:39.849553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.982 [2024-07-24 19:06:39.849563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.982 [2024-07-24 19:06:39.849835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.982 [2024-07-24 19:06:39.850101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.982 [2024-07-24 19:06:39.850120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.982 [2024-07-24 19:06:39.850129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.982 [2024-07-24 19:06:39.854372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.982 [2024-07-24 19:06:39.863647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.982 [2024-07-24 19:06:39.864199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.982 [2024-07-24 19:06:39.864221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.982 [2024-07-24 19:06:39.864231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.982 [2024-07-24 19:06:39.864494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.982 [2024-07-24 19:06:39.864764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.982 [2024-07-24 19:06:39.864776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.982 [2024-07-24 19:06:39.864786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.982 [2024-07-24 19:06:39.869023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.982 [2024-07-24 19:06:39.878284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.982 [2024-07-24 19:06:39.878821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.982 [2024-07-24 19:06:39.878844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.982 [2024-07-24 19:06:39.878854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.982 [2024-07-24 19:06:39.879119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.982 [2024-07-24 19:06:39.879384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.982 [2024-07-24 19:06:39.879395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.982 [2024-07-24 19:06:39.879405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.982 [2024-07-24 19:06:39.883652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.982 [2024-07-24 19:06:39.892917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.982 [2024-07-24 19:06:39.893476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.982 [2024-07-24 19:06:39.893498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.893509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.893778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.894042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.894054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.894063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.898303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.983 [2024-07-24 19:06:39.907591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.983 [2024-07-24 19:06:39.908159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.983 [2024-07-24 19:06:39.908180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.908190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.908454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.908723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.908736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.908745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.913031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.983 [2024-07-24 19:06:39.922287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.983 [2024-07-24 19:06:39.922848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.983 [2024-07-24 19:06:39.922870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.922880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.923144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.923407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.923419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.923428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.927672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.983 [2024-07-24 19:06:39.936925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.983 [2024-07-24 19:06:39.937485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.983 [2024-07-24 19:06:39.937506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.937517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.937785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.938050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.938062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.938072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.942318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.983 [2024-07-24 19:06:39.951581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.983 [2024-07-24 19:06:39.952138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.983 [2024-07-24 19:06:39.952160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.952171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.952439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.952710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.952722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.952731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.956969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.983 [2024-07-24 19:06:39.966243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.983 [2024-07-24 19:06:39.966801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.983 [2024-07-24 19:06:39.966822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.966832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.967096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.967360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.967371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.967380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.971622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.983 [2024-07-24 19:06:39.980888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.983 [2024-07-24 19:06:39.981442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.983 [2024-07-24 19:06:39.981465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:54.983 [2024-07-24 19:06:39.981475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:54.983 [2024-07-24 19:06:39.981743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:54.983 [2024-07-24 19:06:39.982007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.983 [2024-07-24 19:06:39.982019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.983 [2024-07-24 19:06:39.982028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.983 [2024-07-24 19:06:39.986264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:39.995536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:39.996096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:39.996118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:39.996128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:39.996392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:39.996662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:39.996674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:39.996688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.000930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.010215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.010701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.010723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.010733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.010996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.011262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.011273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.011283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.015538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.024820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.025341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.025362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.025372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.025643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.025909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.025921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.025930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.030169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.039436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.040019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.040041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.040051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.040314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.040579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.040591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.040600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.044847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.054107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.054667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.054687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.054698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.054962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.055225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.055237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.055246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.059483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.068751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.069309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.069331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.069341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.069611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.069876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.069888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.069897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.074141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.083401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.083984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.084005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.084015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.084279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.084543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.084555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.084564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.088812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.098090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.098664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.098686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.098696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.098964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.099229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.099241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.099250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.103491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.112826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.113405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.113426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.113436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.113707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.113972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.113984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.244 [2024-07-24 19:06:40.113993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.244 [2024-07-24 19:06:40.118232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.244 [2024-07-24 19:06:40.127487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.244 [2024-07-24 19:06:40.128082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.244 [2024-07-24 19:06:40.128104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.244 [2024-07-24 19:06:40.128114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.244 [2024-07-24 19:06:40.128378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.244 [2024-07-24 19:06:40.128648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.244 [2024-07-24 19:06:40.128660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.128670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.132903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.142174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.142751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.142773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.142783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.143046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.143310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.143321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.143334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.147577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.156844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.157399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.157420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.157430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.157699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.157965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.157976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.157985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.162230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.171491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.171974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.171996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.172006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.172270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.172534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.172546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.172555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.176795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.186054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.186611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.186633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.186643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.186907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.187172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.187183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.187192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.191445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.200715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.201269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.201294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.201305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.201568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.201839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.201852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.201861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.206117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.215389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.215907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.215930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.215940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.216203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.216470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.216483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.216492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.220744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.230004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.230565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.230586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.230596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.230866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.231130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.231142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.231151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.235399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.245 [2024-07-24 19:06:40.244676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.245 [2024-07-24 19:06:40.245138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.245 [2024-07-24 19:06:40.245160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.245 [2024-07-24 19:06:40.245170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.245 [2024-07-24 19:06:40.245434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.245 [2024-07-24 19:06:40.245708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.245 [2024-07-24 19:06:40.245721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.245 [2024-07-24 19:06:40.245730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.245 [2024-07-24 19:06:40.249972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.505 [2024-07-24 19:06:40.259231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.505 [2024-07-24 19:06:40.259810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.505 [2024-07-24 19:06:40.259832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.505 [2024-07-24 19:06:40.259842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.505 [2024-07-24 19:06:40.260107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.505 [2024-07-24 19:06:40.260371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.505 [2024-07-24 19:06:40.260383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.505 [2024-07-24 19:06:40.260392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.505 [2024-07-24 19:06:40.264645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.505 [2024-07-24 19:06:40.273911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.505 [2024-07-24 19:06:40.274485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.505 [2024-07-24 19:06:40.274506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.505 [2024-07-24 19:06:40.274516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.505 [2024-07-24 19:06:40.274786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.505 [2024-07-24 19:06:40.275052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.505 [2024-07-24 19:06:40.275063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.505 [2024-07-24 19:06:40.275072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.505 [2024-07-24 19:06:40.279315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.505 [2024-07-24 19:06:40.288588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.505 [2024-07-24 19:06:40.289169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.289191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.289201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.289465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.289734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.289747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.289756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.294006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.303276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.303783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.303804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.303814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.304078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.304342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.304353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.304363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.308615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.317888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.318464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.318485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.318495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.318767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.319033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.319044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.319054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.323302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.332574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.333154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.333176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.333186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.333450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.333721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.333734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.333743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.337985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.347254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.347808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.347829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.347842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.348107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.348371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.348383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.348392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.352645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.361911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.362403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.362425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.362435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.362705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.362971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.362983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.362993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.367235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.376500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.377089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.377110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.377121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.377385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.377655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.377667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.377677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.381925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.391201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.391712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.391733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.391744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.392008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.392273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.392288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.392297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.396547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.405822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.406289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.406310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.406321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.406584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.406857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.406870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.506 [2024-07-24 19:06:40.406879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.506 [2024-07-24 19:06:40.411117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.506 [2024-07-24 19:06:40.420388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.506 [2024-07-24 19:06:40.420972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.506 [2024-07-24 19:06:40.420993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.506 [2024-07-24 19:06:40.421004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.506 [2024-07-24 19:06:40.421268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.506 [2024-07-24 19:06:40.421534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.506 [2024-07-24 19:06:40.421546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.507 [2024-07-24 19:06:40.421556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.507 [2024-07-24 19:06:40.425805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.507 [2024-07-24 19:06:40.435063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.507 [2024-07-24 19:06:40.435668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.507 [2024-07-24 19:06:40.435690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.507 [2024-07-24 19:06:40.435700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.507 [2024-07-24 19:06:40.435963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.507 [2024-07-24 19:06:40.436227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.507 [2024-07-24 19:06:40.436239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.507 [2024-07-24 19:06:40.436248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.507 [2024-07-24 19:06:40.440493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.507 [2024-07-24 19:06:40.449773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.507 [2024-07-24 19:06:40.450362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.507 [2024-07-24 19:06:40.450383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420
00:29:55.507 [2024-07-24 19:06:40.450393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set
00:29:55.507 [2024-07-24 19:06:40.450663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor
00:29:55.507 [2024-07-24 19:06:40.450928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.507 [2024-07-24 19:06:40.450940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.507 [2024-07-24 19:06:40.450950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.507 [2024-07-24 19:06:40.455193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.507 [2024-07-24 19:06:40.464466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.507 [2024-07-24 19:06:40.465051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.507 [2024-07-24 19:06:40.465072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.507 [2024-07-24 19:06:40.465082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.507 [2024-07-24 19:06:40.465345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.507 [2024-07-24 19:06:40.465615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.507 [2024-07-24 19:06:40.465628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.507 [2024-07-24 19:06:40.465637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 [2024-07-24 19:06:40.469881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.507 [2024-07-24 19:06:40.479155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.507 [2024-07-24 19:06:40.479736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.507 [2024-07-24 19:06:40.479758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.507 [2024-07-24 19:06:40.479768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.507 [2024-07-24 19:06:40.480031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.507 [2024-07-24 19:06:40.480295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.507 [2024-07-24 19:06:40.480307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.507 [2024-07-24 19:06:40.480316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.507 [2024-07-24 19:06:40.484561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.507 [2024-07-24 19:06:40.493857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.507 [2024-07-24 19:06:40.494361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.507 [2024-07-24 19:06:40.494383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.507 [2024-07-24 19:06:40.494393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.507 [2024-07-24 19:06:40.494664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.507 [2024-07-24 19:06:40.494931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.507 [2024-07-24 19:06:40.494943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.507 [2024-07-24 19:06:40.494952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.507 [2024-07-24 19:06:40.499194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.507 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 [2024-07-24 19:06:40.508479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.507 [2024-07-24 19:06:40.508984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.507 [2024-07-24 19:06:40.509006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.507 [2024-07-24 19:06:40.509016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.507 [2024-07-24 19:06:40.509280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.507 [2024-07-24 19:06:40.509301] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.507 [2024-07-24 19:06:40.509544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.507 [2024-07-24 19:06:40.509556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.507 [2024-07-24 19:06:40.509565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.767 [2024-07-24 19:06:40.513811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
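host/bdevperf.sh@17 above creates the NVMe-oF TCP transport through rpc_cmd, the test harness's wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; the "*** TCP Transport Init ***" notice is the target acknowledging it. -u 8192 sets the I/O unit size, and -o is supplied by the harness's TCP NVMF_TRANSPORT_OPTS. The equivalent direct call would look roughly like this (a sketch assuming the default RPC socket path):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192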
00:29:55.767 [2024-07-24 19:06:40.523081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.767 [2024-07-24 19:06:40.523635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.767 [2024-07-24 19:06:40.523657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.767 [2024-07-24 19:06:40.523667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.767 [2024-07-24 19:06:40.523931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.767 [2024-07-24 19:06:40.524195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.767 [2024-07-24 19:06:40.524206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.767 [2024-07-24 19:06:40.524216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.767 [2024-07-24 19:06:40.528461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.767 [2024-07-24 19:06:40.537735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.767 [2024-07-24 19:06:40.538224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.767 [2024-07-24 19:06:40.538245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.767 [2024-07-24 19:06:40.538256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.767 [2024-07-24 19:06:40.538519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.767 [2024-07-24 19:06:40.538791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.767 [2024-07-24 19:06:40.538804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.767 [2024-07-24 19:06:40.538813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.767 [2024-07-24 19:06:40.543059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.767 [2024-07-24 19:06:40.552350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.767 [2024-07-24 19:06:40.552797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.767 [2024-07-24 19:06:40.552820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.767 [2024-07-24 19:06:40.552831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.767 [2024-07-24 19:06:40.553095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.767 [2024-07-24 19:06:40.553361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.767 [2024-07-24 19:06:40.553374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.767 [2024-07-24 19:06:40.553383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.767 Malloc0 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.767 [2024-07-24 19:06:40.557793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.767 [2024-07-24 19:06:40.567079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.767 [2024-07-24 19:06:40.567578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.767 [2024-07-24 19:06:40.567609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7e90 with addr=10.0.0.2, port=4420 00:29:55.767 [2024-07-24 19:06:40.567621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7e90 is same with the state(6) to be set 00:29:55.767 [2024-07-24 19:06:40.567886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7e90 (9): Bad file descriptor 00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.767 [2024-07-24 19:06:40.568159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.767 [2024-07-24 19:06:40.568172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.767 [2024-07-24 19:06:40.568181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
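Interleaved with the reconnect noise, bdevperf.sh steps @17-@21 build the entire target side the host has been waiting for: the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, the cnode1 subsystem with serial SPDK00000000000001, its namespace, and (just below) the 10.0.0.2:4420 listener. Pulled out of the xtrace, the sequence is simply:

    # Target setup driven by bdevperf.sh (same RPCs as the rpc_cmd trace above)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420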
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-24 19:06:40.572430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:55.767 [2024-07-24 19:06:40.579221] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-07-24 19:06:40.581732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:55.767 19:06:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2684791
[2024-07-24 19:06:40.743347] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:05.741
00:30:05.741                                                                    Latency(us)
00:30:05.741 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s    TO/s   Average       min       max
00:30:05.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:05.741 Verification LBA range: start 0x0 length 0x4000
00:30:05.741 Nvme1n1                      : 15.03        3084.77   12.05    8691.94   0.00   10838.32    953.25   38606.66
00:30:05.741 ===================================================================================================================
00:30:05.741 Total                        :              3084.77   12.05    8691.94   0.00   10838.32    953.25   38606.66
00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:05.741 19:06:49
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2685822 ']' 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2685822 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2685822 ']' 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2685822 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2685822 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2685822' 00:30:05.741 killing process with pid 2685822 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2685822 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2685822 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.741 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.679 00:30:06.679 real 0m26.813s 00:30:06.679 user 1m3.886s 00:30:06.679 sys 0m6.445s 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.679 ************************************ 00:30:06.679 END TEST nvmf_bdevperf 00:30:06.679 ************************************ 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.679 ************************************ 00:30:06.679 START TEST nvmf_target_disconnect 
00:30:06.679 ************************************ 00:30:06.679 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:06.938 * Looking for test storage... 00:30:06.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.938 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.939 19:06:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:06.939 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.509 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.510 19:06:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:13.510 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:13.510 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:13.510 19:06:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:13.510 Found net devices under 0000:af:00.0: cvl_0_0 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:13.510 Found net devices under 0000:af:00.1: cvl_0_1 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:13.510 19:06:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:30:13.510 00:30:13.510 --- 10.0.0.2 ping statistics --- 00:30:13.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.510 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:13.510 00:30:13.510 --- 10.0.0.1 ping statistics --- 00:30:13.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.510 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.510 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:13.510 ************************************ 00:30:13.510 START TEST nvmf_target_disconnect_tc1 00:30:13.511 ************************************ 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:13.511 19:06:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.511 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.511 [2024-07-24 19:06:57.798558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.511 [2024-07-24 19:06:57.798675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeafcf0 with addr=10.0.0.2, port=4420 00:30:13.511 [2024-07-24 19:06:57.798731] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:13.511 [2024-07-24 19:06:57.798756] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:13.511 [2024-07-24 19:06:57.798774] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:13.511 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:13.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:13.511 Initializing NVMe Controllers 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:13.511 00:30:13.511 real 0m0.127s 00:30:13.511 user 0m0.051s 00:30:13.511 sys 0m0.074s 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:13.511 ************************************ 00:30:13.511 END TEST nvmf_target_disconnect_tc1 00:30:13.511 ************************************ 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:13.511 19:06:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:13.511 ************************************ 00:30:13.511 START TEST nvmf_target_disconnect_tc2 00:30:13.511 ************************************ 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2691199 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2691199 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2691199 ']' 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.511 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.511 [2024-07-24 19:06:57.949698] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:30:13.511 [2024-07-24 19:06:57.949759] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.511 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.511 [2024-07-24 19:06:58.072579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.511 [2024-07-24 19:06:58.221821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
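disconnect_init starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -e 0xFFFF (all tracepoint groups, hence the "Tracepoint Group Mask 0xFFFF" notice above and the spdk_trace hints that follow) and core mask -m 0xF0. 0xF0 is 0b11110000, so with "Total cores available: 4" the four reactor threads land on cores 4-7, which is exactly what the "Reactor started on core 4..7" notices below report:

    # -m is a CPU bitmap: 0xF0 -> bits 4..7 set -> cores 4, 5, 6, 7
    $ echo 'obase=2; ibase=16; F0' | bc
    11110000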
00:30:13.511 [2024-07-24 19:06:58.221891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.511 [2024-07-24 19:06:58.221913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.511 [2024-07-24 19:06:58.221932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.511 [2024-07-24 19:06:58.221947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.511 [2024-07-24 19:06:58.222097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:13.511 [2024-07-24 19:06:58.222211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:13.511 [2024-07-24 19:06:58.222304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:13.511 [2024-07-24 19:06:58.222310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 Malloc0 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 [2024-07-24 19:06:58.970528] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.146 19:06:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 [2024-07-24 19:06:59.003119] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2691290 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:14.146 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.146 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.063 19:07:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2691199 00:30:16.063 19:07:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:16.063 Read completed with error (sct=0, sc=8) 00:30:16.063 starting I/O failed 00:30:16.063 Read completed with error (sct=0, sc=8) 00:30:16.063 starting I/O failed 00:30:16.063 Read completed with error (sct=0, sc=8) 00:30:16.063 starting I/O failed 00:30:16.063 Read completed with error (sct=0, sc=8) 00:30:16.063 starting 
00:30:16.063 Read completed with error (sct=0, sc=8)
00:30:16.063 starting I/O failed
00:30:16.063 Write completed with error (sct=0, sc=8)
00:30:16.063 starting I/O failed
[... the same aborted Read/Write completion pair repeats verbatim for every I/O still queued on the dying connection ...]
00:30:16.063 [2024-07-24 19:07:01.038501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... more identical aborted Read/Write completions ...]
00:30:16.064 [2024-07-24 19:07:01.038955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... more identical aborted Read/Write completions ...]
00:30:16.065 [2024-07-24 19:07:01.039543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... more identical aborted Read/Write completions ...]
00:30:16.065 [2024-07-24 19:07:01.039919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.065 [2024-07-24 19:07:01.040189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.065 [2024-07-24 19:07:01.040237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420
00:30:16.065 qpair failed and we were unable to recover it.
00:30:16.065 [2024-07-24 19:07:01.040594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.065 [2024-07-24 19:07:01.040651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:16.065 qpair failed and we were unable to recover it.
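The wall of identical completions above is the expected signature of the kill: once the TCP connection drops, spdk_nvme_qpair_process_completions() reports a CQ transport error of -6 (-ENXIO, "No such device or address") on each of the four qpairs, and every I/O still outstanding on them completes with (sct=0, sc=8). Per the NVMe base specification that pair decodes to status code type 0 (Generic Command Status), status code 0x08 (Command Aborted due to SQ Deletion): the host aborts the commands itself while deleting the dead submission queues. A throwaway decoder for just the values seen in this log (the helper name is ours):

    # decode_cpl SCT SC -- maps only the status pairs that occur in this trace.
    decode_cpl() {
        case "$1,$2" in
            0,0) echo "Generic Command Status: Successful Completion" ;;
            0,8) echo "Generic Command Status: Command Aborted due to SQ Deletion" ;;
            *)   echo "sct=$1 sc=$2: see the NVMe base spec status code tables" ;;
        esac
    }
    decode_cpl 0 8    # -> Generic Command Status: Command Aborted due to SQ Deletion

The connect() failures that follow are the other half of the story: with the old queues torn down, the host driver keeps trying to re-establish the socket.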
[... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x54fda0, with errno = 111 on every attempt ...]
00:30:16.067 [2024-07-24 19:07:01.063424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.067 [2024-07-24 19:07:01.063495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.067 qpair failed and we were unable to recover it.
[... the same triplet then repeats for tqpair=0x7fe5e8000b90, still errno = 111, through the rest of the retry window ...]
00:30:16.345 [2024-07-24 19:07:01.079878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.345 [2024-07-24 19:07:01.079917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.345 qpair failed and we were unable to recover it.
00:30:16.345 [2024-07-24 19:07:01.080153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.080172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.080429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.080448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.080598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.080625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.080831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.080850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.081067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.081097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.081268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.081298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.081524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.081554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.081787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.081819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.082071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.082102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.082409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.082440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 
00:30:16.345 [2024-07-24 19:07:01.082721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.082754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.082920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.082951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.083174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.083205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.083444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.083476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.083620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.083652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.083963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.083983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.084286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.084305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.084565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.084597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.084769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.084800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.085026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.085056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 
00:30:16.345 [2024-07-24 19:07:01.085280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.085311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.085457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.085476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.085595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.085620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.085805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.085825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.086108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.086128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.086251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.086270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.086529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.086568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.086877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.086909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.087130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.345 [2024-07-24 19:07:01.087149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.345 qpair failed and we were unable to recover it. 00:30:16.345 [2024-07-24 19:07:01.087357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.087376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 
00:30:16.346 [2024-07-24 19:07:01.087646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.087666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.087859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.087902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.088155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.088187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.088415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.088445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.088680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.088712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.088917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.088938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.089195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.089232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.089395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.089425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.089740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.089772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.089986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.090017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 
00:30:16.346 [2024-07-24 19:07:01.090164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.090194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.090485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.090515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.090735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.090767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.090942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.090973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.091149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.091169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.091379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.091409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.091634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.091666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.091834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.091864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.092140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.092158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.092299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.092318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 
00:30:16.346 [2024-07-24 19:07:01.092609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.092641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.092805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.092837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.093051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.093082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.093321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.093352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.093500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.093532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.093767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.093803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.094084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.094115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.094372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.094403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.094723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.094755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.094983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.095015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 
00:30:16.346 [2024-07-24 19:07:01.095237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.095268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.095574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.095618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.095904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.095936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.096161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.096193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.096406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.346 [2024-07-24 19:07:01.096438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.346 qpair failed and we were unable to recover it. 00:30:16.346 [2024-07-24 19:07:01.096695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.096726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.096954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.096994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.097272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.097292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.097572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.097612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.097908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.097940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 
00:30:16.347 [2024-07-24 19:07:01.098220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.098251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.098557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.098589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.098826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.098858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.099086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.099117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.099357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.099388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.099619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.099649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.099931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.099961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.100178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.100210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.100376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.100407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.100636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.100668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 
00:30:16.347 [2024-07-24 19:07:01.100827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.100846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.101130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.101161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.101481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.101512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.101733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.101752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.102016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.102047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.102270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.102300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.102535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.102565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.102828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.102860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.103080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.103099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.103331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.103351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 
00:30:16.347 [2024-07-24 19:07:01.103481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.103500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.103694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.103714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.103910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.103929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.104096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.104126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.104351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.104382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.104620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.104657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.105001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.105032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.105261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.105292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.105458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.105489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.105713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.105744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 
00:30:16.347 [2024-07-24 19:07:01.106025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.106056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.106287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.106317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.347 [2024-07-24 19:07:01.106533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.347 [2024-07-24 19:07:01.106563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.347 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.106874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.106906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.107154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.107185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.107354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.107385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.107622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.107653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.107868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.107887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.108168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.108188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.108324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.108343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 
00:30:16.348 [2024-07-24 19:07:01.108598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.108622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.108773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.108793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.109046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.109066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.109289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.109309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.109503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.109522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.109728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.109748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.109890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.109921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.110077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.110109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.110391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.110423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.110729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.110760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 
00:30:16.348 [2024-07-24 19:07:01.111038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.111069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.111380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.111411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.111644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.111677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.111931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.111950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.112135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.112153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.112285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.112304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.112536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.112555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.112816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.112847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.112993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.113024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 00:30:16.348 [2024-07-24 19:07:01.113302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.348 [2024-07-24 19:07:01.113332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.348 qpair failed and we were unable to recover it. 
00:30:16.348 [2024-07-24 19:07:01.113640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.348 [2024-07-24 19:07:01.113672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.348 qpair failed and we were unable to recover it.
00:30:16.348 [the same pair of entries repeats continuously from 19:07:01.113 through 19:07:01.168: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."; every entry reports tqpair=0x7fe5e8000b90 except three entries around 19:07:01.123-19:07:01.124 that report tqpair=0x7fe5d8000b90]
00:30:16.354 [2024-07-24 19:07:01.168173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.354 [2024-07-24 19:07:01.168192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.354 qpair failed and we were unable to recover it.
00:30:16.354 [2024-07-24 19:07:01.168398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.168417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.168704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.168724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.168949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.168968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.169253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.169289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.169516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.169546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.169777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.169810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.170033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.170065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.170377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.354 [2024-07-24 19:07:01.170407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.354 qpair failed and we were unable to recover it. 00:30:16.354 [2024-07-24 19:07:01.170687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.170720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.170962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.170993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 
00:30:16.355 [2024-07-24 19:07:01.171203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.171235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.171396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.171415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.171619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.171651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.171990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.172021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.172249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.172280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.172577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.172615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.172833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.172864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.173073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.173104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.173328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.173359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.173667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.173699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 
00:30:16.355 [2024-07-24 19:07:01.173868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.173899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.174066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.174097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.174382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.174413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.174657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.174689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.174963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.174983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.175241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.175277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.175505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.175535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.175787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.175819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.176026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.176045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.176269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.176300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 
00:30:16.355 [2024-07-24 19:07:01.176611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.176643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.176860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.176891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.177172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.177203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.177432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.177451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.177651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.177671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.177857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.177883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.178167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.178198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.178420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.178451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.178694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.178725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.178957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.178988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 
00:30:16.355 [2024-07-24 19:07:01.179146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.179178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.179408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.179427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.179723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.179743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.179894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.179925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.180205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.355 [2024-07-24 19:07:01.180236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.355 qpair failed and we were unable to recover it. 00:30:16.355 [2024-07-24 19:07:01.180460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.180479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.180673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.180692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.180951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.180971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.181091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.181111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.181387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.181407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 
00:30:16.356 [2024-07-24 19:07:01.181594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.181619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.181836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.181856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.182063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.182082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.182344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.182364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.182551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.182571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.182879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.182898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.183154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.183173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.183492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.183523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.183739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.183771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.184015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.184046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 
00:30:16.356 [2024-07-24 19:07:01.184268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.184287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.184410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.184441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.184624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.184657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.184880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.184912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.185178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.185209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.185510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.185529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.185751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.185771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.185976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.185995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.186192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.186211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.186506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.186537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 
00:30:16.356 [2024-07-24 19:07:01.186818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.186850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.187140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.187171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.187391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.187411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.187629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.187661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.187819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.187850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.188079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.188114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.188397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.188428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.188765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.188796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.188955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.188985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.189290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.189324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 
00:30:16.356 [2024-07-24 19:07:01.189615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.189645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.189929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.356 [2024-07-24 19:07:01.189959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.356 qpair failed and we were unable to recover it. 00:30:16.356 [2024-07-24 19:07:01.190211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.190243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.190460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.190491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.190706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.190738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.190903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.190934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.191236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.191266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.191511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.191530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.191812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.191832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.192130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.192162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 
00:30:16.357 [2024-07-24 19:07:01.192404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.192434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.192598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.192638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.192859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.192891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.193097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.193117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.193341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.193373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.193678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.193710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.193949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.193979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.194244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.194275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.194488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.194532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.194636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.194655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 
00:30:16.357 [2024-07-24 19:07:01.194890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.194908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.195028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.195047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.195262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.195282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.195565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.195627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.195939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.195969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.196179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.196210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.196548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.196579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.196896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.196928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.197159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.197178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.197462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.197481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 
00:30:16.357 [2024-07-24 19:07:01.197711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.197731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.197928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.197948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.198082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.198100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.198291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.198310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.198436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.198455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.198672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.198695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.198827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.357 [2024-07-24 19:07:01.198846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.357 qpair failed and we were unable to recover it. 00:30:16.357 [2024-07-24 19:07:01.199067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.199086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.199278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.199297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.199591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.199642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 
00:30:16.358 [2024-07-24 19:07:01.199821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.199852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.200137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.200168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.200462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.200502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.200638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.200670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.200919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.200950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.201197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.201227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.201454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.201473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.201673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.201706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.201937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.201969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.202183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.202202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 
00:30:16.358 [2024-07-24 19:07:01.202407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.202438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.202745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.202776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.203010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.203041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.203275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.203294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.203585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.203622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.203881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.203912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.204231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.204262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.204442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.204472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.204754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.204786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 00:30:16.358 [2024-07-24 19:07:01.204966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.358 [2024-07-24 19:07:01.204997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.358 qpair failed and we were unable to recover it. 
00:30:16.358 [2024-07-24 19:07:01.205954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.358 [2024-07-24 19:07:01.206022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420
00:30:16.358 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7fe5d8000b90 through 19:07:01.208177, then resumes for tqpair=0x7fe5e8000b90 from 19:07:01.208408 ...]
00:30:16.359 [2024-07-24 19:07:01.213480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.359 [2024-07-24 19:07:01.213512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.359 qpair failed and we were unable to recover it.
00:30:16.359 [2024-07-24 19:07:01.213726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.213757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.213981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.214013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.214240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.214270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.214492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.214523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.214677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.214709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.215016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.215046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.215368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.215399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.215648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.215680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.215922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.215952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.216286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.216318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 
00:30:16.359 [2024-07-24 19:07:01.216484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.216515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.216796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.216828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.217064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.217095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.217325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.217356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.217681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.217712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.217956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.217986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.218292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.218323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.359 [2024-07-24 19:07:01.218619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.359 [2024-07-24 19:07:01.218651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.359 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.218874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.218906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.219191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.219222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 
00:30:16.360 [2024-07-24 19:07:01.219387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.219418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.219669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.219701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.219857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.219888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.220019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.220038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.220255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.220285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.220435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.220466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.220702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.220735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.220984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.221015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.221252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.221272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.221389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.221409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 
00:30:16.360 [2024-07-24 19:07:01.221614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.221634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.221745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.221789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.222032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.222063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.222365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.222396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.222676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.222707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.222945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.222975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.223191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.223210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.223348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.223366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.223643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.223663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.223805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.223824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 
00:30:16.360 [2024-07-24 19:07:01.223973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.224004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.224243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.224274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.224463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.224494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.224777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.224809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.225032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.225063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.225272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.225303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.225524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.225544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.225693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.225713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.225922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.225953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.226231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.226263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 
00:30:16.360 [2024-07-24 19:07:01.226542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.226572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.226886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.226918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.227146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.227181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.227437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.227468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.360 qpair failed and we were unable to recover it. 00:30:16.360 [2024-07-24 19:07:01.227675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.360 [2024-07-24 19:07:01.227694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.227952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.227983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.228196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.228215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.228496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.228516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.228812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.228844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.229141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.229171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 
00:30:16.361 [2024-07-24 19:07:01.229394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.229425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.229733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.229766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.230047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.230078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.230372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.230391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.230612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.230632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.230831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.230850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.231054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.231073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.231363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.231394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.231617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.231649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.231806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.231838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 
00:30:16.361 [2024-07-24 19:07:01.232143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.232175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.232317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.232348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.232501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.232531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.232753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.232785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.233006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.233037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.233262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.233294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.233600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.233638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.233799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.233830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.234140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.234171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.234397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.234427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 
00:30:16.361 [2024-07-24 19:07:01.234734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.234766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.234997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.235028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.235347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.235378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.235589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.235627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.235935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.235966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.236129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.236160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.236406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.236437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.236665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.236685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.236881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.236900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.237048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.237068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 
00:30:16.361 [2024-07-24 19:07:01.237258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.237288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.361 qpair failed and we were unable to recover it. 00:30:16.361 [2024-07-24 19:07:01.237438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.361 [2024-07-24 19:07:01.237469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.237802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.237839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.238125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.238155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.238397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.238416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.238556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.238575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.238837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.238856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.239078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.239109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.239397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.239429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.239673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.239693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 
00:30:16.362 [2024-07-24 19:07:01.239867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.239897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.240108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.240139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.240290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.240322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.240614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.240634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.240835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.240854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.241140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.241160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.241379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.241399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.241600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.241626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.241800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.241819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.242075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.242094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 
00:30:16.362 [2024-07-24 19:07:01.242357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.242377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.242607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.242628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.242772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.242791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.243050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.243080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.243294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.243324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.243544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.243575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.243763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.243795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.244105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.244135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.244443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.244473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.244736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.244768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 
00:30:16.362 [2024-07-24 19:07:01.244930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.244961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.245296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.245327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.362 [2024-07-24 19:07:01.245541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.362 [2024-07-24 19:07:01.245572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.362 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.245752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.245783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.245998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.246029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.246356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.246386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.246541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.246573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.246895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.246927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.247151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.247182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.247397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.247428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 
00:30:16.363 [2024-07-24 19:07:01.247573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.247622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.247848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.247880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.248115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.248138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.248392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.248411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.248706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.248738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.248964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.248995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.249330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.249362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.249670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.249702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.249963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.249994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 00:30:16.363 [2024-07-24 19:07:01.250197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.363 [2024-07-24 19:07:01.250217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.363 qpair failed and we were unable to recover it. 
00:30:16.369 [2024-07-24 19:07:01.303503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.303533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.303749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.303791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.303984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.304003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.304195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.304214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.304406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.304424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.304657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.304677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.304865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.304884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.305044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.305073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.305284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.305314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.305489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.305519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 
00:30:16.369 [2024-07-24 19:07:01.305825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.305857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.306014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.306044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.306361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.306381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.306681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.306711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.307018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.307049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.307297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.307328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.307553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.307571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.307801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.307821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.308075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.308095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.308299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.308332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 
00:30:16.369 [2024-07-24 19:07:01.308543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.308574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.308805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.308837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.309118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.309150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.309401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.309442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.309650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.309670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.309926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.309945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.310167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.369 [2024-07-24 19:07:01.310187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.369 qpair failed and we were unable to recover it. 00:30:16.369 [2024-07-24 19:07:01.310303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.310322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.310612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.310644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.310885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.310916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 
00:30:16.370 [2024-07-24 19:07:01.311125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.311155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.311371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.311390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.311487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.311507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.311635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.311655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.311824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.311843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.312029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.312064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.312399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.312430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.312640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.312671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.312920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.312950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.313180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.313211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 
00:30:16.370 [2024-07-24 19:07:01.313357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.313377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.313630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.313650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.313822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.313841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.314036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.314054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.314247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.314267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.314572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.314592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.314722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.314742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.314965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.314984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.315216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.315235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.315412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.315431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 
00:30:16.370 [2024-07-24 19:07:01.315705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.315725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.315913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.315932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.316126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.316144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.316344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.316363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.316495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.316514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.316850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.316870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.317156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.317187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.317361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.317393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.317552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.317583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 00:30:16.370 [2024-07-24 19:07:01.317819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.370 [2024-07-24 19:07:01.317839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.370 qpair failed and we were unable to recover it. 
00:30:16.370 [2024-07-24 19:07:01.318047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.318066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.318251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.318269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.318475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.318494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.318694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.318714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.318940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.318960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.319106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.319125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.319322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.319343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.319489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.319509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.319792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.319812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.320041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.320061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 
00:30:16.371 [2024-07-24 19:07:01.320248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.320267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.320407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.320426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.320695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.320719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.320920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.320939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.321122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.321142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.321325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.321344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.321547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.321566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.321831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.321863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.322171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.322202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.322437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.322456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 
00:30:16.371 [2024-07-24 19:07:01.322651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.322670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.322892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.322911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.323110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.323143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.323421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.323452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.323785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.323805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.324018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.324037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.324230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.324249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.324473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.324492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.324662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.324682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.324887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.324917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 
00:30:16.371 [2024-07-24 19:07:01.325137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.325167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.325493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.325524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.325780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.325800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.325977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.325996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.371 [2024-07-24 19:07:01.326197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.371 [2024-07-24 19:07:01.326216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.371 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.326436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.326454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.326738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.326758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.327037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.327057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.327335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.327355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.327631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.327651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 
00:30:16.372 [2024-07-24 19:07:01.327784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.327804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.328060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.328080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.328291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.328311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.328507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.328525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.328654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.328673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.328865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.328885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.329083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.329103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.329286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.329305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.329517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.329536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.329781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.329801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 
00:30:16.372 [2024-07-24 19:07:01.330095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.330126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.330302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.330333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.330543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.330579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.330828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.330848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.330980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.330999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.331212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.331231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.331416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.331435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.331655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.331676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.331893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.331924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.332136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.332166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 
00:30:16.372 [2024-07-24 19:07:01.332441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.332460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.332590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.332614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.332809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.332829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.333031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.372 [2024-07-24 19:07:01.333050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.372 qpair failed and we were unable to recover it. 00:30:16.372 [2024-07-24 19:07:01.333251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.373 [2024-07-24 19:07:01.333270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.373 qpair failed and we were unable to recover it. 00:30:16.373 [2024-07-24 19:07:01.333397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.373 [2024-07-24 19:07:01.333417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.373 qpair failed and we were unable to recover it. 00:30:16.373 [2024-07-24 19:07:01.333552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.373 [2024-07-24 19:07:01.333572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.373 qpair failed and we were unable to recover it. 00:30:16.373 [2024-07-24 19:07:01.333765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.373 [2024-07-24 19:07:01.333784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.373 qpair failed and we were unable to recover it. 00:30:16.373 [2024-07-24 19:07:01.334079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.373 [2024-07-24 19:07:01.334099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.373 qpair failed and we were unable to recover it. 00:30:16.373 [2024-07-24 19:07:01.334233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.373 [2024-07-24 19:07:01.334252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.373 qpair failed and we were unable to recover it. 
00:30:16.373 [2024-07-24 19:07:01.334449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.373 [2024-07-24 19:07:01.334469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.373 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every intermediate reconnect attempt from 19:07:01.334670 through 19:07:01.386076: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fe5e8000b90 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:16.661 [2024-07-24 19:07:01.386219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.661 [2024-07-24 19:07:01.386241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.661 qpair failed and we were unable to recover it.
00:30:16.661 [2024-07-24 19:07:01.386389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.386409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.386707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.386738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.386889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.386920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.387162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.387193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.387478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.387498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.387695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.387715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.388023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.388054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.388223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.388255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.388566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.388597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.388875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.388894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 
00:30:16.661 [2024-07-24 19:07:01.389151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.389171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.389384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.389419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.389704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.389736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.390059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.390090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.390251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.390283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.390476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.390506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.390713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.390746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.390987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.391017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.391243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.391274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.391565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.391584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 
00:30:16.661 [2024-07-24 19:07:01.391781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.391800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.391991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.392011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.392248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.392268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.392480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.392500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.392701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.392721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.392976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.392995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.393198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.393217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.393340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.393371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.393650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.393683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.393918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.393950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 
00:30:16.661 [2024-07-24 19:07:01.394265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.394295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.394611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.394643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.394873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.394903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.661 qpair failed and we were unable to recover it. 00:30:16.661 [2024-07-24 19:07:01.395198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.661 [2024-07-24 19:07:01.395217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.395485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.395504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.395597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.395620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.395913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.395932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.396062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.396082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.396288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.396306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.396512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.396534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 
00:30:16.662 [2024-07-24 19:07:01.396816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.396836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.397034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.397053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.397309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.397328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.397581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.397600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.397883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.397902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.398190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.398209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.398396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.398427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.398653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.398686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.398898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.398930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.399231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.399250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 
00:30:16.662 [2024-07-24 19:07:01.399454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.399474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.399591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.399616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.399839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.399859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.400011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.400030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.400225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.400244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.400451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.400471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.400740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.400760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.400966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.400986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.401133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.401152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.401338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.401357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 
00:30:16.662 [2024-07-24 19:07:01.401502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.401521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.401727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.401760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.401884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.401915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.402136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.402166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.402314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.402344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.402622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.402641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.402858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.402877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.403166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.403198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.403445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.403476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.662 [2024-07-24 19:07:01.403655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.403675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 
00:30:16.662 [2024-07-24 19:07:01.403939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.662 [2024-07-24 19:07:01.403959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.662 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.404144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.404162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.404362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.404381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.404523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.404543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.404865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.404885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.405081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.405100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.405327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.405358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.405692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.405723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.405884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.405904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.406110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.406133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 
00:30:16.663 [2024-07-24 19:07:01.406390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.406410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.406529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.406548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.406752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.406771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.406971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.406990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.407106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.407126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.407259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.407278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.407534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.407554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.407681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.407701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.407908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.407939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.408278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.408309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 
00:30:16.663 [2024-07-24 19:07:01.408467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.408499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.408730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.408763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.409052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.409072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.409225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.409256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.409451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.409483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.409757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.409789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.410024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.410055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.410279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.410310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.410545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.410575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.410740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.410772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 
00:30:16.663 [2024-07-24 19:07:01.411013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.411033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.411243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.411274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.411433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.411464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.411687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.411707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.663 qpair failed and we were unable to recover it. 00:30:16.663 [2024-07-24 19:07:01.411856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.663 [2024-07-24 19:07:01.411875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.412018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.412038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.412235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.412254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.412455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.412475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.412733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.412753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.412962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.412981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 
00:30:16.664 [2024-07-24 19:07:01.413224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.413254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.413512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.413543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.413830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.413861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.414021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.414051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.414193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.414224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.414510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.414539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.414762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.414794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.415050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.415081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.415229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.415248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.415457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.415480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 
00:30:16.664 [2024-07-24 19:07:01.415687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.415707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.415908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.415928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.416171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.416203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.416413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.416442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.416729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.416749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.417004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.417040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.417269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.417299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.417507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.417538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.417721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.417753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 00:30:16.664 [2024-07-24 19:07:01.418032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.664 [2024-07-24 19:07:01.418063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.664 qpair failed and we were unable to recover it. 
00:30:16.664 [2024-07-24 19:07:01.418216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.664 [2024-07-24 19:07:01.418247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.664 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every retry from 19:07:01.418547 through 19:07:01.463541, always against tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420, errno = 111 ...]
[... repeats continue through 19:07:01.464997 ...]
00:30:16.670 [2024-07-24 19:07:01.465186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x55de80 is same with the state(6) to be set
00:30:16.670 [2024-07-24 19:07:01.465498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.670 [2024-07-24 19:07:01.465567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:16.670 qpair failed and we were unable to recover it.
[... two more identical connect() failures against tqpair=0x54fda0 at 19:07:01.465947 and 19:07:01.466238 ...]
00:30:16.670 [2024-07-24 19:07:01.466509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.670 [2024-07-24 19:07:01.466530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.670 qpair failed and we were unable to recover it.
[... the same three-line error against tqpair=0x7fe5e8000b90 repeats from 19:07:01.466815 through 19:07:01.474379 ...]
00:30:16.671 [2024-07-24 19:07:01.474688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.474726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.474928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.474947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.475170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.475189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.475336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.475355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.475560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.475579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.475791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.475812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.476017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.476048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.476263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.476294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.476612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.476644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.476874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.476905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 
00:30:16.671 [2024-07-24 19:07:01.477068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.477099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.477410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.477442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.477619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.477650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.477861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.477893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.478065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.478084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.478370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.478401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.478626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.478659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.478875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.478894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.479174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.479206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.479440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.479472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 
00:30:16.671 [2024-07-24 19:07:01.479729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.479762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.479993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.480035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.480295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.480315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.480609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.480641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.480993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.481024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.481244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.481274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.481583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.481623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.481902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.481933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.482155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.482186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.482399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.482430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 
00:30:16.671 [2024-07-24 19:07:01.482741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.482762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.483059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.483089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.483425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.483455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.671 qpair failed and we were unable to recover it. 00:30:16.671 [2024-07-24 19:07:01.483669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.671 [2024-07-24 19:07:01.483707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.483934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.483978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.484239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.484280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.484589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.484627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.484908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.484940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.485216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.485247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.485560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.485591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 
00:30:16.672 [2024-07-24 19:07:01.485857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.485889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.486194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.486225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.486398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.486428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.486650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.486682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.486901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.486920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.487055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.487084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.487363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.487394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.487646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.487678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.487931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.487962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.488122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.488153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 
00:30:16.672 [2024-07-24 19:07:01.488466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.488497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.488745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.488776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.489034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.489066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.489280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.489311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.489488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.489531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.489715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.489735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.489851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.489870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.490130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.490169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.490384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.490415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.490654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.490687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 
00:30:16.672 [2024-07-24 19:07:01.491021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.491053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.491292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.491322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.491499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.491530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.491768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.491788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.491991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.492022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.492173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.492204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.492428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.492459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.492684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.492716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.492946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.492977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.672 [2024-07-24 19:07:01.493223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.493242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 
00:30:16.672 [2024-07-24 19:07:01.493425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.672 [2024-07-24 19:07:01.493444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.672 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.493649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.493669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.493954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.493974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.494265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.494287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.494578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.494617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.494870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.494900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.495183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.495214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.495494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.495525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.495773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.495805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.495929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.495961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 
00:30:16.673 [2024-07-24 19:07:01.496254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.496285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.496565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.496596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.496776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.496795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.497059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.497090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.497325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.497356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.497661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.497693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.497916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.497947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.498256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.498288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.498462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.498493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.498795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.498827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 
00:30:16.673 [2024-07-24 19:07:01.499067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.499086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.499341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.499377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.499597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.499637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.499883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.499903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.500214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.500246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.500472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.500502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.500784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.500817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.501070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.501101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.501325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.501356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.501637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.501669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 
00:30:16.673 [2024-07-24 19:07:01.501952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.501984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.502206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.502237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.502458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.502489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.502712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.502744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.502973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.502992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.503189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.503208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.503405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.503425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.503655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.673 [2024-07-24 19:07:01.503674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.673 qpair failed and we were unable to recover it. 00:30:16.673 [2024-07-24 19:07:01.503889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.503909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.504109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.504129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 
00:30:16.674 [2024-07-24 19:07:01.504326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.504345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.504553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.504572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.504781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.504802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.505064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.505108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.505323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.505354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.505578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.505619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.505830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.505860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.506079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.506098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.506380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.506399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.506590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.506616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 
00:30:16.674 [2024-07-24 19:07:01.506766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.506785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.506922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.506941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.507142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.507174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.507391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.507422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.507635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.507667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.507993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.508024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.508309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.508339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.508601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.508649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.508931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.508962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.509278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.509309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 
00:30:16.674 [2024-07-24 19:07:01.509538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.509568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.509728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.509760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.510095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.510126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.510362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.510393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.510682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.510702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.510914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.674 [2024-07-24 19:07:01.510945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.674 qpair failed and we were unable to recover it. 00:30:16.674 [2024-07-24 19:07:01.511191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.511222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.511443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.511474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.511693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.511725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.511956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.511976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 
00:30:16.675 [2024-07-24 19:07:01.512178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.512199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.512404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.512424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.512626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.512646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.512785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.512805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.513092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.513123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.513346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.513377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.513616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.513649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.513892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.513912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.514105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.514125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.514270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.514290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 
00:30:16.675 [2024-07-24 19:07:01.514440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.514460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.514732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.514765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.515045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.515077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.515250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.515287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.515525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.515556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.515787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.515819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.516044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.516063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.516258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.516277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.516544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.516588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.516895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.516926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 
00:30:16.675 [2024-07-24 19:07:01.517234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.517266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.517573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.517613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.517769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.517800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.518021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.518052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.518261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.518293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.518600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.518640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.518995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.519026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.519261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.519281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.519396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.519416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.519631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.519664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 
00:30:16.675 [2024-07-24 19:07:01.519945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.519976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.520202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.520232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.675 qpair failed and we were unable to recover it. 00:30:16.675 [2024-07-24 19:07:01.520393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.675 [2024-07-24 19:07:01.520424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.520735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.520767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.521004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.521023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.521156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.521175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.521375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.521394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.521581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.521600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.521802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.521821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.522040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.522071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 
00:30:16.676 [2024-07-24 19:07:01.522415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.522486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.522764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.522801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.523030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.523061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.523299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.523330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.523570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.523601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.523847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.523878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.524082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.524104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.524327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.524347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.524514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.524545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.524868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.524901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 
00:30:16.676 [2024-07-24 19:07:01.525293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.525312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.525533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.525552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.525813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.525833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.525962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.525981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.526142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.526174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.526487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.526517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.526751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.526783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.527103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.527134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.527387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.527418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.527640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.527673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 
00:30:16.676 [2024-07-24 19:07:01.527900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.527931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.528143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.528163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.528446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.528465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.528672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.528692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.528959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.528990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.529219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.529250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.529475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.529506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.529733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.529765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.676 qpair failed and we were unable to recover it. 00:30:16.676 [2024-07-24 19:07:01.529989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.676 [2024-07-24 19:07:01.530020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.530179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.530198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 
00:30:16.677 [2024-07-24 19:07:01.530459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.530490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.530718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.530749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.530968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.530998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.531226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.531257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.531432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.531463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.531612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.531644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.531821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.531852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.532187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.532208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.532340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.532358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.532561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.532581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 
00:30:16.677 [2024-07-24 19:07:01.532884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.532922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.533206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.533237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.533478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.533509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.533725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.533757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.534001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.534041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.534229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.534248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.534433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.534452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.534654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.534673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.534805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.534824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.535109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.535140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 
00:30:16.677 [2024-07-24 19:07:01.535387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.535417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.535731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.535751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.536016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.536047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.536262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.536293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.536534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.536565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.536853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.536885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.537176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.537207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.537440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.537471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.537644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.537676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.677 [2024-07-24 19:07:01.537925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.537944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 
00:30:16.677 [2024-07-24 19:07:01.538097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.677 [2024-07-24 19:07:01.538116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.677 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.538255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.538273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.538472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.538490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.538679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.538699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.538885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.538904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.539124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.539155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.539382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.539413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.539647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.539682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.539877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.539909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.540071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.540102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 
00:30:16.678 [2024-07-24 19:07:01.540325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.540345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.540626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.540645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.540780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.540799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.540989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.541008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.541158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.541188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.541415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.541446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.541752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.541784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.542091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.542122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.542267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.542286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.542479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.542524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 
00:30:16.678 [2024-07-24 19:07:01.542678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.542716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.542937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.542968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.543179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.543199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.543464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.543495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.543746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.543778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.544002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.544022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.544303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.544323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.544463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.544483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.544611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.544631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.544832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.544852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 
00:30:16.678 [2024-07-24 19:07:01.545057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.545077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.545211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.545230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.545443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.545474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.545781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.545813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.545986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.546006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.546264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.546294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.546526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.546558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.546814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.678 [2024-07-24 19:07:01.546846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.678 qpair failed and we were unable to recover it. 00:30:16.678 [2024-07-24 19:07:01.547151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.547171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.547360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.547379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 
00:30:16.679 [2024-07-24 19:07:01.547508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.547527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.547785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.547805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.548036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.548066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.548311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.548342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.548557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.548576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.548778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.548798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.548997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.549028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.549180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.549210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.549373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.549405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.549624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.549656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 
00:30:16.679 [2024-07-24 19:07:01.549934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.549954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.550240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.550259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.550445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.550464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.550595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.550621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.550815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.550835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.550975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.550994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.551259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.551290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.551509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.551540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.551742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.551775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.551991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.552010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 
00:30:16.679 [2024-07-24 19:07:01.552303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.552326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.552550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.552569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.552771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.552791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.552998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.553018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.553151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.553171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.553307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.553326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.553523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.679 [2024-07-24 19:07:01.553543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.679 qpair failed and we were unable to recover it. 00:30:16.679 [2024-07-24 19:07:01.553744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.553764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.553956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.553975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.554099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.554116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 
00:30:16.680 [2024-07-24 19:07:01.554371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.554390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.554647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.554666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.554924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.554943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.555071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.555091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.555290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.555310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.555443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.555463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.555597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.555624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.555811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.555831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.556037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.556069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 00:30:16.680 [2024-07-24 19:07:01.556297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.680 [2024-07-24 19:07:01.556328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.680 qpair failed and we were unable to recover it. 
00:30:16.680 [2024-07-24 19:07:01.556478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.556508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.556728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.556759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.556970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.557001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.557172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.557203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.557419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.557451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.557678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.557710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.557921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.557952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.558291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.558359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.558519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.558554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.558686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.558718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.559013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.559035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.559234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.559253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.559520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.559565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.559727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.559759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.560035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.560054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.560207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.560239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.560458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.560490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.560720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.560752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.560994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.561025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.561236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.561266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.561452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.561475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.561706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.561725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.680 [2024-07-24 19:07:01.562006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.680 [2024-07-24 19:07:01.562046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.680 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.562193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.562224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.562522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.562552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.562679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.562710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.563018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.563037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.563169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.563188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.563315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.563334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.563589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.563614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.563888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.563908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.564113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.564133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.564336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.564355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.564479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.564499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.564690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.564710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.564991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.565011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.565270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.565289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.565493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.565513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.565680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.565700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.565812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.565832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.566020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.566039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.566157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.566176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.566411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.566431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.566648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.566667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.566854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.566874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.566985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.567005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.567206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.567225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.567374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.567394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.567653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.567672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.567870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.567893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.568068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.568099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.568267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.568297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.568458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.568488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.568724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.681 [2024-07-24 19:07:01.568757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.681 qpair failed and we were unable to recover it.
00:30:16.681 [2024-07-24 19:07:01.568883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.568915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.569234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.569266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.569478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.569509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.569670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.569702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.569925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.569956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.570250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.570270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.570364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.570385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.570642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.570662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.570919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.570939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.571128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.571147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.571335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.571355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.571565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.571584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.571854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.571874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.572037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.572057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.572310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.572346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.572571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.572626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.572935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.572967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.573246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.573266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.573471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.573490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.573707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.573727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.573858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.573877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.574005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.574024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.574158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.574177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.574310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.574329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.574587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.574611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.574796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.574816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.574992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.575013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.575130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.575148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.575428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.575448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.575781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.575801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.576000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.576020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.576301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.576333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.576505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.576851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.576883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.577102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.577121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.577264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.577284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.682 [2024-07-24 19:07:01.577493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.682 [2024-07-24 19:07:01.577525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.682 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.577809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.577841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.578006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.578037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.578333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.578364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.578517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.578548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.578800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.578832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.579005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.579035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.579266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.579298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.579508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.579539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.579769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.579789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.579977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.579999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.580142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.580174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.580398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.580429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.580663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.580694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.580843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.580873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.581012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.581042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.581292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.581311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.581518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.581538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.581758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.581777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.582062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.582081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.582290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.582309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.582458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.582477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.582661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.582681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.582911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.582930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.583191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.583211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.583344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.583363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.583620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.583641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.583786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.583805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.583993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.584024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.584217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.584248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.584422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.584451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.584653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.584685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.584845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.584877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.585182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.585213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.585369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.585387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.585561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.585580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.683 [2024-07-24 19:07:01.585884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.683 [2024-07-24 19:07:01.585916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.683 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.586146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.586171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.586382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.586402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.586683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.586703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.586927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.586945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.587200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.587219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.587370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.587389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.587686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.587705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.587965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.587984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.588170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.588190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.588419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.588450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.588622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.588654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.588814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.588844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.589134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.589165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.589300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.589330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.589594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.589620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.589834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.589855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.590007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.590038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.590203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.590233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.590454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.590484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.590723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.590743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.591001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.591020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.591202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.591222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.591415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.591434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.591667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.591687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.591971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.591991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.592187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.592206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.592460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.592479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.592601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.684 [2024-07-24 19:07:01.592636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.684 qpair failed and we were unable to recover it.
00:30:16.684 [2024-07-24 19:07:01.592749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.592768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.592910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.592929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.593077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.593096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.593291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.593310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.593533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.593552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.593770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.593790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.593984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.594004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.594222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.594253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.594502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.594533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.594805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.594837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.595140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.595160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.595293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.595313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.595525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.595547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.595740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.595760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.596076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.685 [2024-07-24 19:07:01.596095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.685 qpair failed and we were unable to recover it.
00:30:16.685 [2024-07-24 19:07:01.596294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.596333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.596643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.596675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.596924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.596955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.597202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.597221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.597424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.597443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.597698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.597718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.597916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.597935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.598140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.598159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.598369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.598389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.598688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.598720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 
00:30:16.685 [2024-07-24 19:07:01.598878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.598909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.599147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.599178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.599421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.599440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.599619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.599639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.599852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.599871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.600137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.600160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.600402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.600433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.600655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.600687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.600968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.601000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.601164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.601195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 
00:30:16.685 [2024-07-24 19:07:01.601359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.601390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.601542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.685 [2024-07-24 19:07:01.601562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.685 qpair failed and we were unable to recover it. 00:30:16.685 [2024-07-24 19:07:01.601758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.601778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.601979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.601998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.602188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.602220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.602505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.602536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.602685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.602717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.602931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.602963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.603129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.603169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.603360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.603378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 
00:30:16.686 [2024-07-24 19:07:01.603546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.603566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.603705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.603724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.603927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.603945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.604080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.604099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.604416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.604447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.604680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.604713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.604964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.604997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.605244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.605280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.605577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.605617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.605824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.605855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 
00:30:16.686 [2024-07-24 19:07:01.606091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.606110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.606319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.606350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.606651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.606683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.606913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.606944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.607106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.607125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.607347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.607367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.607498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.607518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.607656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.607675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.608825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.608861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.609151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.609172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 
00:30:16.686 [2024-07-24 19:07:01.609448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.609469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.609734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.609754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.609870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.609889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.610040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.610059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.610320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.610339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.610544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.610563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.610705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.686 [2024-07-24 19:07:01.610724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.686 qpair failed and we were unable to recover it. 00:30:16.686 [2024-07-24 19:07:01.610856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.610876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.611076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.611096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.611295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.611314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 
00:30:16.687 [2024-07-24 19:07:01.611516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.611535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.611670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.611689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.611882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.611901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.612087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.612106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.612555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.612581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.612785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.612805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.613043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.613062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.613208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.613239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.613466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.613497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.613729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.613763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 
00:30:16.687 [2024-07-24 19:07:01.614033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.614064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.614202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.614234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.614486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.614517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.614748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.614781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.615098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.615129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.615383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.615402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.615609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.615630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.615760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.615797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.616037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.616069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.616240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.616271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 
00:30:16.687 [2024-07-24 19:07:01.616503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.616522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.616657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.616691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.616977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.617008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.617168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.617199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.617425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.617445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.617703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.617724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.617859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.617877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.618067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.618099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.618323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.618355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.618662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.618694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 
00:30:16.687 [2024-07-24 19:07:01.618924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.618957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.619182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.619213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.619476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.619507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.619799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.687 [2024-07-24 19:07:01.619832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.687 qpair failed and we were unable to recover it. 00:30:16.687 [2024-07-24 19:07:01.620081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.620112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.620229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.620249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.620437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.620456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.620660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.620692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.620915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.620946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.621222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.621242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 
00:30:16.688 [2024-07-24 19:07:01.621433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.621452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.621655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.621675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.621871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.621890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.622020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.622039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.622192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.622223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.622381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.622412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.622578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.622619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.622778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.622809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.623088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.623131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.623388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.623407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 
00:30:16.688 [2024-07-24 19:07:01.623599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.623639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.623775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.623806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.624029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.624060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.624290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.624322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.624551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.624581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.624757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.624790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.625082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.625114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.625398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.625435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.625593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.625637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.625868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.625899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 
00:30:16.688 [2024-07-24 19:07:01.626127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.626158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.626400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.626431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.626655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.626686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.626912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.626943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.627156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.627187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.627347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.688 [2024-07-24 19:07:01.627378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.688 qpair failed and we were unable to recover it. 00:30:16.688 [2024-07-24 19:07:01.627686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.627705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.627910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.627929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.628028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.628047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.628278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.628308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 
00:30:16.689 [2024-07-24 19:07:01.628530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.628561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.628808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.628840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.629058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.629089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.629365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.629384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.629582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.629601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.629740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.629760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.629955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.629986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.630136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.630167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.630387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.630418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.630696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.630733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 
00:30:16.689 [2024-07-24 19:07:01.630891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.630922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.631162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.631193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.631497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.631516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.631773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.631812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.631991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.632022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.632240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.632271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.632514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.632533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.632819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.632851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.633067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.633096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 00:30:16.689 [2024-07-24 19:07:01.633386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.689 [2024-07-24 19:07:01.633404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.689 qpair failed and we were unable to recover it. 
00:30:16.689 [2024-07-24 19:07:01.633619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.689 [2024-07-24 19:07:01.633638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.689 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats verbatim for every reconnect attempt from 19:07:01.633765 through 19:07:01.687571; the elapsed-time prefix advances from 00:30:16.689 to 00:30:16.978 ...]
00:30:16.978 [2024-07-24 19:07:01.687710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.978 [2024-07-24 19:07:01.687730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.978 qpair failed and we were unable to recover it.
00:30:16.978 [2024-07-24 19:07:01.687940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.978 [2024-07-24 19:07:01.687971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.978 qpair failed and we were unable to recover it. 00:30:16.978 [2024-07-24 19:07:01.688192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.978 [2024-07-24 19:07:01.688223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.978 qpair failed and we were unable to recover it. 00:30:16.978 [2024-07-24 19:07:01.688445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.978 [2024-07-24 19:07:01.688476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.978 qpair failed and we were unable to recover it. 00:30:16.978 [2024-07-24 19:07:01.688763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.978 [2024-07-24 19:07:01.688800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.689112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.689144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.689361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.689392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.689671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.689702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.689990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.690025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.690384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.690416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.690633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.690665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 
00:30:16.979 [2024-07-24 19:07:01.690957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.690988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.691203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.691234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.691549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.691581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.691908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.691940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.692178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.692209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.692552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.692583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.692921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.692953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.693191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.693221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.693445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.693465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.693733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.693754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 
00:30:16.979 [2024-07-24 19:07:01.693888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.693908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.694111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.694143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.694359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.694389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.694673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.694705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.694856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.694888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.695175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.695206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.695417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.695448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.695676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.695708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.696014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.696044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.696328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.696359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 
00:30:16.979 [2024-07-24 19:07:01.696623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.696643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.696939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.696958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.697182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.697201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.697352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.697371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.697492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.697511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.697790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.697810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.698029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.698060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.698248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.698284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.698592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.979 [2024-07-24 19:07:01.698633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.979 qpair failed and we were unable to recover it. 00:30:16.979 [2024-07-24 19:07:01.698852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.698883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 
00:30:16.980 [2024-07-24 19:07:01.699203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.699235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.699495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.699526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.699844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.699876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.700096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.700127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.700374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.700393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.700641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.700661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.700917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.700937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.701236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.701255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.701441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.701460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.701745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.701765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 
00:30:16.980 [2024-07-24 19:07:01.701962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.701981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.702199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.702218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.702442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.702462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.702659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.702679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.702884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.702904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.703118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.703138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.703453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.703484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.703734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.703766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.704009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.704040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.704348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.704379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 
00:30:16.980 [2024-07-24 19:07:01.704611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.704631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.704832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.704862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.705085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.705116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.705343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.705362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.705560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.705580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.705747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.705767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.705956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.705987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.706227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.706258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.706417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.706448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.706743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.706763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 
00:30:16.980 [2024-07-24 19:07:01.707048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.707067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.707270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.707290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.707489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.707509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.707714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.707733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.707949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.707969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.980 [2024-07-24 19:07:01.708120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.980 [2024-07-24 19:07:01.708139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.980 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.708400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.708431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.708718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.708755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.709039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.709070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.709283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.709314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 
00:30:16.981 [2024-07-24 19:07:01.709555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.709586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.709763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.709795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.710076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.710107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.710390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.710421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.710644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.710665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.710862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.710893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.711111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.711142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.711476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.711507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.711723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.711754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.712034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.712065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 
00:30:16.981 [2024-07-24 19:07:01.712285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.712317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.712543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.712574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.712743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.712774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.712998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.713028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.713320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.713351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.713631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.713651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.713848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.713867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.714067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.714087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.714230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.714249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.714443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.714474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 
00:30:16.981 [2024-07-24 19:07:01.714660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.714692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.714845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.714877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.715101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.715132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.715377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.715396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.715695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.715726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.715949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.715980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.716147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.716178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.716451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.716470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.716680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.716700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.716848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.716867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 
00:30:16.981 [2024-07-24 19:07:01.716979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.716999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.717146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.717165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.981 [2024-07-24 19:07:01.717373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.981 [2024-07-24 19:07:01.717393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.981 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.717530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.717549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.717713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.717733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.718019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.718050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.718269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.718301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.718556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.718575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.718857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.718889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.719063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.719094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 
00:30:16.982 [2024-07-24 19:07:01.719327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.719358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.719513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.719532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.719789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.719809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.720035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.720065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.720381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.720413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.720642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.720674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.720938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.720969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.721191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.721222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.721464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.721495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 00:30:16.982 [2024-07-24 19:07:01.721775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.982 [2024-07-24 19:07:01.721807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.982 qpair failed and we were unable to recover it. 
00:30:16.982 [2024-07-24 19:07:01.722020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.982 [2024-07-24 19:07:01.722050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.982 qpair failed and we were unable to recover it.
00:30:16.982 [2024-07-24 19:07:01.722314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.982 [2024-07-24 19:07:01.722346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.982 qpair failed and we were unable to recover it.
[... the same three-line failure — a posix_sock_create connect() error with errno = 111 (ECONNREFUSED), an nvme_tcp_qpair_connect_sock error for tqpair=0x7fe5e8000b90 against 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 19:07:01.722 and 19:07:01.777, with only the timestamps changing ...]
00:30:16.989 [2024-07-24 19:07:01.777705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.989 [2024-07-24 19:07:01.777737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:16.989 qpair failed and we were unable to recover it.
00:30:16.989 [2024-07-24 19:07:01.777956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.777986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.778157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.778189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.778510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.778540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.778773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.778805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.779085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.779122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.779282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.779312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.779562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.779592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.779808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.779827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.780052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.780071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.780213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.780233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 
00:30:16.989 [2024-07-24 19:07:01.780460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.780479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.780684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.780704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.780841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.780872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.781178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.781210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.781365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.781396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.781732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.781764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.782090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.782121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.782450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.782481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.782641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.782661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.782868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.782887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 
00:30:16.989 [2024-07-24 19:07:01.783093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.783112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.783255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.783274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.783468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.783487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.783683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.783703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.783893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.783913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.784141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.784160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.784360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.784379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.784609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.784629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.784748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.784767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.785079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.785109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 
00:30:16.989 [2024-07-24 19:07:01.785284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.785314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.785626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.785658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.785972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.786002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.786232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.786263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.786498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.786529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.786806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.989 [2024-07-24 19:07:01.786825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.989 qpair failed and we were unable to recover it. 00:30:16.989 [2024-07-24 19:07:01.787028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.787058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.787307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.787338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.787566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.787597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.787836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.787868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 
00:30:16.990 [2024-07-24 19:07:01.788129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.788160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.788495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.788527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.788808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.788827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.789052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.789071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.789218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.789244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.789493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.789513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.789759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.789780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.790036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.790077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.790242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.790274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.790511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.790542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 
00:30:16.990 [2024-07-24 19:07:01.790844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.790864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.791080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.791099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.791373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.791392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.791582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.791601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.791797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.791817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.791921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.791939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.792138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.792158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.792285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.792304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.792594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.792618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.792825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.792844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 
00:30:16.990 [2024-07-24 19:07:01.793050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.793069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.793350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.793369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.793568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.793588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.793900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.793920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.794044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.794063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.794262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.794281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.794567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.794598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.794896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.794927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.795108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.795139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.795363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.795394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 
00:30:16.990 [2024-07-24 19:07:01.795712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.795744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.795996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.796027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.796237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.990 [2024-07-24 19:07:01.796269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.990 qpair failed and we were unable to recover it. 00:30:16.990 [2024-07-24 19:07:01.796573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.796593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.796802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.796822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.797014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.797034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.797235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.797254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.797550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.797582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.797838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.797870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.798120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.798151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 
00:30:16.991 [2024-07-24 19:07:01.798401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.798433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.798741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.798773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.798947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.798978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.799146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.799178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.799462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.799498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.799764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.799784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.800013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.800032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.800312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.800332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.800557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.800577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.800779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.800799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 
00:30:16.991 [2024-07-24 19:07:01.801085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.801104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.801326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.801346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.801580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.801599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.801814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.801833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.802030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.802050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.802187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.802207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.802344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.802363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.802559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.802579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.802871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.802891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.803029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.803048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 
00:30:16.991 [2024-07-24 19:07:01.803236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.803256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.803510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.803529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.803664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.803696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.803951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.803983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.804262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.804293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.804461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.804491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.804711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.804743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.805076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.805107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.805327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.805358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.991 [2024-07-24 19:07:01.805638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.805670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 
00:30:16.991 [2024-07-24 19:07:01.805862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.991 [2024-07-24 19:07:01.805881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.991 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.806033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.806052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.806250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.806269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.806474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.806493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.806692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.806712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.806846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.806866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.807046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.807065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.807294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.807313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.807481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.807500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.807706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.807726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 
00:30:16.992 [2024-07-24 19:07:01.807986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.808006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.808253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.808284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.808583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.808621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.808853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.808884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.809190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.809226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.809507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.809539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.809794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.809814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.809990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.810021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.810325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.810356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.810595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.810635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 
00:30:16.992 [2024-07-24 19:07:01.810786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.810818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.811172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.811203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.811459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.811490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.811662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.811697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.811812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.811831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.812030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.812049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.812277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.812308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.812540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.812571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.812891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.812923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.813238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.813269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 
00:30:16.992 [2024-07-24 19:07:01.813499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.813530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.813775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.813807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.814010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.814030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.814180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.814199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.814385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.814405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.814615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.814634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.814859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.814891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.815050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.992 [2024-07-24 19:07:01.815082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.992 qpair failed and we were unable to recover it. 00:30:16.992 [2024-07-24 19:07:01.815306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.815337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.815467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.815498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 
00:30:16.993 [2024-07-24 19:07:01.815791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.815823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.815995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.816027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.816194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.816225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.816383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.816414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.816714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.816733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.816870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.816889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.817059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.817078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.817332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.817351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.817574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.817593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.817798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.817829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 
00:30:16.993 [2024-07-24 19:07:01.818041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.818071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.818378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.818410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.818717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.818748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.818979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.819010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.819253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.819290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.819522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.819552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.819720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.819753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.820002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.820034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.820259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.820291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.820529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.820560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 
00:30:16.993 [2024-07-24 19:07:01.820869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.820888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.821119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.821139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.821420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.821439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.821630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.821651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.821797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.821816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.822018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.822037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.822250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.822281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.822501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.822531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.993 [2024-07-24 19:07:01.822837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.993 [2024-07-24 19:07:01.822870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.993 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.823182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.823201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 
00:30:16.994 [2024-07-24 19:07:01.823339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.823359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.823562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.823581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.823872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.823892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.824119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.824138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.824403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.824435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.824671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.824691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.824895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.824915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.825049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.825068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.825211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.825230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.825438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.825457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 
00:30:16.994 [2024-07-24 19:07:01.825712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.825732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.825991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.826011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.826156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.826176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.826368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.826388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.826674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.826706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.826943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.826975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.827268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.827299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.827491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.827523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.827809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.827851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.827973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.827992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 
00:30:16.994 [2024-07-24 19:07:01.828246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.828267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.828466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.828497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.828717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.828749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.828997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.829017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.829156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.829178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.829398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.829417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.829725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.829745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.830053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.830073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.830300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.830320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.830508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.830539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 
00:30:16.994 [2024-07-24 19:07:01.830756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.830775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.830924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.830943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.831149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.831180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.831417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.831448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.831582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.831620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.831777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.994 [2024-07-24 19:07:01.831816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.994 qpair failed and we were unable to recover it. 00:30:16.994 [2024-07-24 19:07:01.832017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.832036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.832242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.832261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.832548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.832567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.832713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.832733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 
00:30:16.995 [2024-07-24 19:07:01.833004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.833023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.833166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.833186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.833353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.833373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.833514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.833533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.833815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.833835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.834028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.834047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.834273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.834303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.834515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.834547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.834799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.834830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.834994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.835014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 
00:30:16.995 [2024-07-24 19:07:01.835299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.835318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.835575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.835594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.835787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.835819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.836032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.836063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.836365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.836395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.836630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.836662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.836825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.836856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.837113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.837132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.837279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.837298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.837502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.837521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 
00:30:16.995 [2024-07-24 19:07:01.837719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.837739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.837928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.837946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.838200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.838219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.838438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.838457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.838660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.838707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.838929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.838961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.839267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.839286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.839402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.839422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.839623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.839643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.839865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.839884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 
00:30:16.995 [2024-07-24 19:07:01.840179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.840213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.840441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.840471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.840811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.995 [2024-07-24 19:07:01.840831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.995 qpair failed and we were unable to recover it. 00:30:16.995 [2024-07-24 19:07:01.841056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.841076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.841358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.841378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.841574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.841593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.841757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.841777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.842032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.842051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.842326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.842345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.842493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.842512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 
00:30:16.996 [2024-07-24 19:07:01.842770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.842790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.843048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.843068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.843350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.843369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.843565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.843584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.843756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.843776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.844058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.844090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.844315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.844347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.844630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.844661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.844874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.844904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.845179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.845210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 
00:30:16.996 [2024-07-24 19:07:01.845522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.845541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.845772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.845793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.845997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.846016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.846239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.846270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.846550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.846581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.846912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.846943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.847177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.847196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.847342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.847361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.847615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.847635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.847874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.847905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 
00:30:16.996 [2024-07-24 19:07:01.848120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.848151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.848460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.848492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.848634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.848666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.848838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.848869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.849035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.849054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.849289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.849321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.849441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.849472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.849700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.849732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.850016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.850047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 00:30:16.996 [2024-07-24 19:07:01.850274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.850293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.996 qpair failed and we were unable to recover it. 
00:30:16.996 [2024-07-24 19:07:01.850570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.996 [2024-07-24 19:07:01.850589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.850781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.850801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.850989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.851008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.851218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.851238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.851380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.851400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.851608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.851628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.851881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.851901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.852102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.852121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.852330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.852349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.852570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.852589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 
00:30:16.997 [2024-07-24 19:07:01.852812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.852832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.853086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.853122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.853352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.853383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.853693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.853726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.854050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.854081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.854305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.854336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.854571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.854610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.854905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.854925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.855115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.855135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 00:30:16.997 [2024-07-24 19:07:01.855343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.997 [2024-07-24 19:07:01.855362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:16.997 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-07-24 19:07:01.906129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-07-24 19:07:01.906148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-07-24 19:07:01.906355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-07-24 19:07:01.906374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-07-24 19:07:01.906576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.906595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.906819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.906838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.906979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.906999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.907184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.907203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.907338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.907357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.907483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.907503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.907713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.907744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.908025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.908056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 
00:30:17.003 [2024-07-24 19:07:01.908287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.908318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.908540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.908575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.908906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.908938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.909163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.909193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.909503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.909533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.909707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.909738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.909971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.910002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.910223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.910242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.910447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.910467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.910760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.910791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 
00:30:17.003 [2024-07-24 19:07:01.911037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.911068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.911287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.911318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.911564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.911594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.911909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.911940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.912154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.912184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.912510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.912541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.912765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.912797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.913079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.913113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.913393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.913424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.913796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.913828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 
00:30:17.003 [2024-07-24 19:07:01.914134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.914165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.914450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.914481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.914694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.914726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.914948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.914979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.915196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.915216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.915469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.915505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.915815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.915846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.916075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.916106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.916330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.916362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 00:30:17.003 [2024-07-24 19:07:01.916621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.003 [2024-07-24 19:07:01.916641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.003 qpair failed and we were unable to recover it. 
00:30:17.004 [2024-07-24 19:07:01.916896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.916915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.917100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.917119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.917399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.917419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.917634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.917654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.917904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.917923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.918201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.918220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.918420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.918440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.918696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.918715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.918901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.918921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.919109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.919138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 
00:30:17.004 [2024-07-24 19:07:01.919427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.919459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.919742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.919777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.920038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.920069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.920376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.920406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.920713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.920744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.920892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.920923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.921077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.921107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.921403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.921438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.921617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.921649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.921880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.921911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 
00:30:17.004 [2024-07-24 19:07:01.922142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.922173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.922391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.922411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.922562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.922582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.922881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.922912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.923194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.923225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.923452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.923482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.923779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.923811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.924023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.924054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.924212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.924243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.924493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.924523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 
00:30:17.004 [2024-07-24 19:07:01.924742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.924772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.924984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.925003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.925292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.925312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.925458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.925477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.925767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.925786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.926045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.926064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.926256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.004 [2024-07-24 19:07:01.926274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.004 qpair failed and we were unable to recover it. 00:30:17.004 [2024-07-24 19:07:01.926561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.926597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.926945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.926976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.927207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.927237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 
00:30:17.005 [2024-07-24 19:07:01.927463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.927493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.927829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.927862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.928094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.928125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.928303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.928333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.928558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.928588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.928840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.928871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.929092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.929123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.929403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.929434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.929766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.929797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.930117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.930148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 
00:30:17.005 [2024-07-24 19:07:01.930401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.930431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.930689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.930738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.930877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.930896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.931181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.931211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.931381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.931411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.931643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.931675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.931889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.931907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.932112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.932131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.932258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.932276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.932530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.932549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 
00:30:17.005 [2024-07-24 19:07:01.932830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.932851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.933050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.933069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.933379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.933410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.933694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.933725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.934025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.934055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.934208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.934239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.934455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.934486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.934714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.934746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.934904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.934935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.935191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.935221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 
00:30:17.005 [2024-07-24 19:07:01.935448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.935467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.935582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.935601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-07-24 19:07:01.935891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-07-24 19:07:01.935910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.936114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.936134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.936390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.936422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.936730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.936762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.937012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.937032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.937233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.937263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.937575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.937613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.937782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.937814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-07-24 19:07:01.938035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.938054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.938249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.938270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.938421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.938440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.938628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.938647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.938796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.938827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.939007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.939038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.939251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.939282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.939530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.939550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.939752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.939772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.939973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.939993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-07-24 19:07:01.940250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.940271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.940532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.940569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.940810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.940841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.941058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.941088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.941303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.941333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.941550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.941580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.941818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.941850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.942077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.942108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.942391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.942422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-07-24 19:07:01.942568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-07-24 19:07:01.942599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-07-24 19:07:01.942893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.006 [2024-07-24 19:07:01.942925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.006 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for roughly 200 further connect attempts between 19:07:01.942 and 19:07:01.992 ...]
00:30:17.299 [2024-07-24 19:07:01.992688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.299 [2024-07-24 19:07:01.992724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.299 qpair failed and we were unable to recover it.
00:30:17.299 [2024-07-24 19:07:01.992948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.992979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.993194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.993224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.993373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.993392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.993519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.993559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.993782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.993813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.994023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.994054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.994287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.994307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.994498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.994518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.994669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.994693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.994899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.994918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 
00:30:17.299 [2024-07-24 19:07:01.995122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.995142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.995286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.995305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.995429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.995448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.299 [2024-07-24 19:07:01.995638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.299 [2024-07-24 19:07:01.995658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.299 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.995787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.995817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.995990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.996021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.996274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.996294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.996439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.996458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.996683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.996703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.996825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.996844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 
00:30:17.300 [2024-07-24 19:07:01.996962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.996981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.997110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.997130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.998179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.998216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.998447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.998468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.998697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.998717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.998862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.998892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.999111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.999142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.999268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.999299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.999519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.999550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:01.999793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:01.999825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 
00:30:17.300 [2024-07-24 19:07:01.999994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.000025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.000190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.000222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.000434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.000466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.000689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.000720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.000947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.000978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.001268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.001299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.001510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.001541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.001752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.001784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.300 qpair failed and we were unable to recover it. 00:30:17.300 [2024-07-24 19:07:02.002011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.300 [2024-07-24 19:07:02.002042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.002202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.002232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 
00:30:17.301 [2024-07-24 19:07:02.002443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.002474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.002641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.002673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.002884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.002916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.003136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.003167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.003382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.003401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.003589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.003642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.003864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.003896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.004149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.004168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.004315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.004337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.004539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.004558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 
00:30:17.301 [2024-07-24 19:07:02.004775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.004795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.004925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.004944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.005091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.005110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.005313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.005332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.005449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.005468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.005616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.005635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.005923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.005942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.006065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.006085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.006214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.006233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.006425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.006456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 
00:30:17.301 [2024-07-24 19:07:02.006637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.006670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.006879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.006910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.007114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.007146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.007285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.007316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.007568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.007598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.007826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.007857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.008167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.008199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.008445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.008476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.008705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.008737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.008885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.008927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 
00:30:17.301 [2024-07-24 19:07:02.009122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.009141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.301 [2024-07-24 19:07:02.009266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.301 [2024-07-24 19:07:02.009285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.301 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.009490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.009521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.009739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.009770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.010075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.010107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.010326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.010358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.010658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.010690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.010918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.010949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.011094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.011124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.011333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.011353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 
00:30:17.302 [2024-07-24 19:07:02.011581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.011600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.011740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.011759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.011976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.012007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.012224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.012255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.012497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.012528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.012684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.012716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.012927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.012958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.013185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.013216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.013372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.013409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.013565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.013596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 
00:30:17.302 [2024-07-24 19:07:02.013885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.013918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.014229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.014260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.014406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.014437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.014639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.014671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.014898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.014928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.015102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.015121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.015350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.015381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.015667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.015699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.015919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.015950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.302 qpair failed and we were unable to recover it. 00:30:17.302 [2024-07-24 19:07:02.016245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.302 [2024-07-24 19:07:02.016276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 
00:30:17.303 [2024-07-24 19:07:02.016488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.016519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.016732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.016764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.017070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.017090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.017346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.017382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.017612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.017643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.017808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.017839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.018031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.018062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.018233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.018252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.018446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.018465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.018597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.018623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 
00:30:17.303 [2024-07-24 19:07:02.018895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.018914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.019121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.019140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.019259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.019279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.019508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.019527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.019792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.019812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.020014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.020034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.020180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.020198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.020413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.020432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.020633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.020665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.020824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.020855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 
00:30:17.303 [2024-07-24 19:07:02.021030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.021061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.021206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.021237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.021511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.021542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.021724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.021756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.021987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.022018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.022181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.022212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.022494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.022525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.022745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.022777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.022920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.022967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 00:30:17.303 [2024-07-24 19:07:02.023121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.023151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it. 
00:30:17.303 [2024-07-24 19:07:02.023314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.303 [2024-07-24 19:07:02.023334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.303 qpair failed and we were unable to recover it.
00:30:17.303 [... the same connect() failed, errno = 111 / sock connection error triplet repeats for tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 through 2024-07-24 19:07:02.029959; every attempt ends "qpair failed and we were unable to recover it." ...]
00:30:17.304 [2024-07-24 19:07:02.030270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.304 [2024-07-24 19:07:02.030340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420 00:30:17.304 qpair failed and we were unable to recover it.
00:30:17.304 [... the triplet repeats for tqpair=0x7fe5d8000b90 through 2024-07-24 19:07:02.031740, then again for tqpair=0x7fe5e8000b90 through 2024-07-24 19:07:02.067346, always with addr=10.0.0.2, port=4420; every attempt ends "qpair failed and we were unable to recover it." ...]
00:30:17.311 [2024-07-24 19:07:02.067490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.067510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.067644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.067663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.067866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.067885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.068019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.068038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.068238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.068257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.068416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.068436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.068633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.068653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.068767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.068787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.068975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.068994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.069114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.069133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 
00:30:17.311 [2024-07-24 19:07:02.069250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.069269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.069408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.069428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.069563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.069583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.069738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.069759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.069948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.069968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.070082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.070101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.070231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.070250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.070380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.070399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.070618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.070639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.070760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.070779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 
00:30:17.311 [2024-07-24 19:07:02.070928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.070947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.311 [2024-07-24 19:07:02.071071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.311 [2024-07-24 19:07:02.071090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.311 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.071215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.071234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.071491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.071510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.071644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.071664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.071783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.071802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.072078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.072097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.072216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.072235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.072490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.072513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.072715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.072734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 
00:30:17.312 [2024-07-24 19:07:02.072853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.072873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.073060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.073079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.073206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.073225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.073355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.073374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.073490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.073509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.073646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.073666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.073860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.073879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.074018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.074038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.074228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.074247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.074382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.074401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 
00:30:17.312 [2024-07-24 19:07:02.074617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.074636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.074784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.074803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.075011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.075031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.075172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.075192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.075393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.075412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.075545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.075564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.075764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.075785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.075921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.075940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.076060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.076080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.076354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.076373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 
00:30:17.312 [2024-07-24 19:07:02.076505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.312 [2024-07-24 19:07:02.076525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.312 qpair failed and we were unable to recover it. 00:30:17.312 [2024-07-24 19:07:02.076737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.076757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.076951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.076970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.077963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.077983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 
00:30:17.313 [2024-07-24 19:07:02.078122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.078141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.078276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.078295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.078484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.078504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.078629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.078649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.078938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.078957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.079075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.079094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.079289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.079308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.079445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.079464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.079591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.079620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.079748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.079768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 
00:30:17.313 [2024-07-24 19:07:02.079902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.079921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.080049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.080068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.080196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.080215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.080403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.080422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.080545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.080564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.080685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.080705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.080835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.080855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.081045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.081065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.081261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.081281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 00:30:17.313 [2024-07-24 19:07:02.081422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.313 [2024-07-24 19:07:02.081441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.313 qpair failed and we were unable to recover it. 
00:30:17.313 [2024-07-24 19:07:02.081578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.081598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.081748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.081767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.081913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.081933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.082130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.082149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.082383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.082403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.082531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.082550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.082674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.082695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.082796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.082815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.082939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.082958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.083090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.083110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 
00:30:17.314 [2024-07-24 19:07:02.083316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.083336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.083459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.083478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.083616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.083635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.083891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.083911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.084030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.084049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.084315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.084334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.084526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.084545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.084675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.084694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.084885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.084904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.085024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.085043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 
00:30:17.314 [2024-07-24 19:07:02.085218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.085237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.085421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.085440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.085700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.085721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.085841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.085860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.085967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.085986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.086245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.086264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.086508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.086528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.086682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.086703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.086825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.086844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 00:30:17.314 [2024-07-24 19:07:02.086985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.087005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.314 qpair failed and we were unable to recover it. 
00:30:17.314 [2024-07-24 19:07:02.087208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.314 [2024-07-24 19:07:02.087228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.087455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.087474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.087595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.087633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.087750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.087769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.087904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.087924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.088043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.088063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.088192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.088211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.088412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.088431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.088552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.088571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.088775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.088795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 
00:30:17.315 [2024-07-24 19:07:02.088920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.088940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.089193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.089213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.089337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.089356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.089478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.089497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.089779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.089799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.089930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.089949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.090133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.090152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.090357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.090377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.090576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.090597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.090729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.090749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 
00:30:17.315 [2024-07-24 19:07:02.090883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.090903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.091091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.091111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.091259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.091279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.091414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.091433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.091564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.091583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.091725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.091747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.092010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.092029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.092216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.092235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.092376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.092395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.092534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.092554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 
00:30:17.315 [2024-07-24 19:07:02.092759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.092779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.092908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.092928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.093046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.093066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.093252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.093272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.093466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.093485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.315 qpair failed and we were unable to recover it. 00:30:17.315 [2024-07-24 19:07:02.093681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.315 [2024-07-24 19:07:02.093700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.093830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.093849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.093997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.094016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.094153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.094172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.094372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.094392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 
00:30:17.316 [2024-07-24 19:07:02.094502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.094522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.094647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.094667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.094806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.094826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.095108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.095128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.095331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.095351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.095485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.095504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.095708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.095727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.095843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.095862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.095999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.096019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.096213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.096233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 
00:30:17.316 [2024-07-24 19:07:02.096356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.096376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.096494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.096513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.096729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.096750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.096873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.096893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.097045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.097065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.097258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.097278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.097532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.097551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.097688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.097709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.097841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.097860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.098054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.098073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 
00:30:17.316 [2024-07-24 19:07:02.098261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.098280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.316 [2024-07-24 19:07:02.098465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.316 [2024-07-24 19:07:02.098484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.316 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.098600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.098626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.098744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.098763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.098926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.098946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.099077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.099103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.099363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.099382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.099510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.099529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.099670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.099690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.099822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.099841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 
00:30:17.317 [2024-07-24 19:07:02.099971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.099990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.100124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.100143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.100291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.100311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.100571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.100613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.100831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.100863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.101106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.101137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.101294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.101314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.101430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.101449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.101579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.101598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.101743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.101775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 
00:30:17.317 [2024-07-24 19:07:02.102010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.102042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.317 [2024-07-24 19:07:02.102258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.317 [2024-07-24 19:07:02.102278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.317 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.102424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.102444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.102698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.102718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.102925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.102944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.103202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.103241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.103395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.103426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.103581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.103635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.103802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.103834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.104048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.104079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 
00:30:17.318 [2024-07-24 19:07:02.104297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.104328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.104508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.104549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.104686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.104706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.104892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.104932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.105068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.105100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.105411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.105442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.105597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.105621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.105822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.105853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.105995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.106026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.106241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.106273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 
00:30:17.318 [2024-07-24 19:07:02.106508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.106540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.106871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.106903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.107060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.107092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.107320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.107352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.107568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.107599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.107764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.107801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.107959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.107992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.108222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.108252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.108493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.108524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 00:30:17.318 [2024-07-24 19:07:02.108673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.318 [2024-07-24 19:07:02.108705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.318 qpair failed and we were unable to recover it. 
00:30:17.318 [2024-07-24 19:07:02.108867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.108898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.109062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.109094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.109316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.109348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.109515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.109558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.109762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.109783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.110044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.110075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.110228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.110259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.110479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.110524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.110713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.110733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.110880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.110899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 
00:30:17.319 [2024-07-24 19:07:02.111031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.111063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.111276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.111308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.111647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.111680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.111830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.111862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.112017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.112048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.112216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.112247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.112385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.112418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.112679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.112711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.112881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.112912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.113066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.113098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 
00:30:17.319 [2024-07-24 19:07:02.113244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.113274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.113554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.113585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.113797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.113829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.113981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.114014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.114159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.114179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.114386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.114416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.114573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.114614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.114949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.114981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.115263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.319 [2024-07-24 19:07:02.115293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.319 qpair failed and we were unable to recover it. 00:30:17.319 [2024-07-24 19:07:02.115454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.115474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 
00:30:17.320 [2024-07-24 19:07:02.115690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.115710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.115845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.115864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.116002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.116021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.116229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.116259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.116510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.116541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.116767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.116790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.116910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.116930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.117147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.117178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.117322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.117354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.117562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.117593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 
00:30:17.320 [2024-07-24 19:07:02.117761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.117781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.117979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.117998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.118195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.118215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.118441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.118460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.118585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.118608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.118869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.118889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.119087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.119107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.119301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.119321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.119516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.119535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.119674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.119695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 
00:30:17.320 [2024-07-24 19:07:02.119835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.119855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.119978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.119997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.120135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.120166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.120378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.120409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.120623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.120656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.120941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.120961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.320 [2024-07-24 19:07:02.121095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.320 [2024-07-24 19:07:02.121115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.320 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.121261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.121282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.121417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.121438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.121632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.121652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 
00:30:17.321 [2024-07-24 19:07:02.121854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.121873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.122066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.122085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.122276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.122296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.122487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.122516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.122680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.122711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.122856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.122888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.123030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.123062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.123222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.123252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.123421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.123453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.123620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.123651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 
00:30:17.321 [2024-07-24 19:07:02.123807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.123838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.123990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.124021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.124252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.124282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.124530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.124550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.124679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.124699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.124823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.124846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.124983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.125015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.125160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.125191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.125340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.125371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.125528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.125560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 
00:30:17.321 [2024-07-24 19:07:02.125718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.125750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.125964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.125995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.126207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.126227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.126358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.126390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.126537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.126569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.126803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.126834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.127068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.127099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.127249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.127281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.127523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.321 [2024-07-24 19:07:02.127554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.321 qpair failed and we were unable to recover it. 00:30:17.321 [2024-07-24 19:07:02.127821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.322 [2024-07-24 19:07:02.127853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.322 qpair failed and we were unable to recover it. 
00:30:17.327 [2024-07-24 19:07:02.166455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.327 [2024-07-24 19:07:02.166523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:17.327 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x54fda0 through 2024-07-24 19:07:02.172740 ...]
00:30:17.328 [2024-07-24 19:07:02.168471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.168503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.168784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.168815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.168987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.169018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.169181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.169212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.169358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.169389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.169632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.169665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.169889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.169920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.170087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.170118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.170269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.170301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.170473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.170504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 
00:30:17.328 [2024-07-24 19:07:02.170787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.170819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.170978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.171009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.171239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.171270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.171438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.171470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.171685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.171717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.171974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.172005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.172169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.172200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.172508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.172539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.172708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.172740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.172879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.172904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 
00:30:17.328 [2024-07-24 19:07:02.173051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.173070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.173267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.173286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.173475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.173495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.173641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.173660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.173792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.173823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.328 [2024-07-24 19:07:02.174046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.328 [2024-07-24 19:07:02.174077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.328 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.174301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.174333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.174554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.174573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.174725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.174757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.174926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.174957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 
00:30:17.329 [2024-07-24 19:07:02.175122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.175153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.175314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.175345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.175556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.175586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.175758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.175777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.175973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.175992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.176237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.176267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.176477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.176508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.176840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.176871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.177075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.177107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.177263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.177293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 
00:30:17.329 [2024-07-24 19:07:02.177517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.177547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.177781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.177814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.178128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.178159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.178308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.178339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.178480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.178524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.178657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.178678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.178805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.178825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.179010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.179029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.179173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.179205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.179473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.179503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 
00:30:17.329 [2024-07-24 19:07:02.179647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.179687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.179962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.180007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.180166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.180197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.180520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.180551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.180717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.180737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.180864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.180883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.181029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.181059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.329 [2024-07-24 19:07:02.181339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.329 [2024-07-24 19:07:02.181370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.329 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.181592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.181634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.181918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.181954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 
00:30:17.330 [2024-07-24 19:07:02.182147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.182178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.182340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.182371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.182537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.182567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.182809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.182841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.183057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.183087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.183254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.183286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.183500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.183531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.183678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.183710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.183890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.183921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.184142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.184174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 
00:30:17.330 [2024-07-24 19:07:02.184327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.184358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.184663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.184694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.184922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.184942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.185069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.185089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.185351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.185382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.185615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.185647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.185813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.185844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.186000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.186032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.186313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.186344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.186572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.186610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 
00:30:17.330 [2024-07-24 19:07:02.186767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.186787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.186926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.186946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.187067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.187087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.187281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.187312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.187466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.187497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.187661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.187693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.187928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.187947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.188081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.188100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.188237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.188268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.188510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.188540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 
00:30:17.330 [2024-07-24 19:07:02.188782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.188813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.330 [2024-07-24 19:07:02.188975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.330 [2024-07-24 19:07:02.189006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.330 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.189152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.189183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.189403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.189434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.189596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.189635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.189791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.189822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.189975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.190006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.190163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.190193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.190411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.190442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.190726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.190767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 
00:30:17.331 [2024-07-24 19:07:02.190910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.190941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.191156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.191187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.191485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.191516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.191666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.191698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.191906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.191939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.192093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.192112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.192346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.192376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.192594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.192633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.192780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.192810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.193112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.193143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 
00:30:17.331 [2024-07-24 19:07:02.193374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.193405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.193553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.193573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.193776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.193796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.193919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.193938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.194232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.194263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.194472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.194503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.194651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.194683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.194847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.194878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.195102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.195132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.195367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.195409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 
00:30:17.331 [2024-07-24 19:07:02.195615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.195635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.195792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.195812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.195939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.195959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.196175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.196210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.196376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.196407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.196564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.196595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.196768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.196811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.197135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.197167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.197389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.331 [2024-07-24 19:07:02.197420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.331 qpair failed and we were unable to recover it. 00:30:17.331 [2024-07-24 19:07:02.197589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.197630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 
00:30:17.332 [2024-07-24 19:07:02.197786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.197806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.197997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.198028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.198184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.198214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.198424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.198444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.198631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.198651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.198788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.198808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.198997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.199016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.199141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.199161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.199381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.199411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.199558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.199594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 
00:30:17.332 [2024-07-24 19:07:02.199744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.199775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.199968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.199999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.200154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.200184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.200407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.200437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.200580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.200620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.200855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.200886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.201044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.201075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.201218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.201248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.201414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.201444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.201676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.201707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 
00:30:17.332 [2024-07-24 19:07:02.201856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.201888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.202035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.202054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.202243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.202264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.202527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.202558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.202784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.202815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.202958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.202990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.203150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.203182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.203409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.203440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.203593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.203638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 00:30:17.332 [2024-07-24 19:07:02.203865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.332 [2024-07-24 19:07:02.203895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.332 qpair failed and we were unable to recover it. 
00:30:17.338 [... the same three-line posix.c/nvme_tcp.c error triplet repeats for every qpair reconnect attempt from 19:07:02.202035 through 19:07:02.249745, always with tqpair=0x7fe5e8000b90, addr=10.0.0.2, port=4420; duplicate records elided ...]
00:30:17.338 [2024-07-24 19:07:02.250020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.250051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.250277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.250308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.250630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.250661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.250814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.250845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.251065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.251096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.251419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.251450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.251594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.251637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.251876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.251907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.252070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.252090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.252318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.252337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 
00:30:17.338 [2024-07-24 19:07:02.252548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.252579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.252764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.252784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.252978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.253008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.253170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.253201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.253415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.253446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.253755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.253775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.253899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.253918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.254105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.254136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.338 [2024-07-24 19:07:02.254376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.338 [2024-07-24 19:07:02.254407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.338 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.254638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.254658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 
00:30:17.339 [2024-07-24 19:07:02.254852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.254873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.255017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.255036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.255246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.255277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.255515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.255545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.255814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.255835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.256041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.256060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.256175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.256194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.256503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.256534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.256780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.256802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.256960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.256991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 
00:30:17.339 [2024-07-24 19:07:02.257206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.257236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.257478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.257509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.257788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.257807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.257940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.257960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.258171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.258201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.258352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.258383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.258545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.258576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.258868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.258887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.259025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.259045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.259244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.259275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 
00:30:17.339 [2024-07-24 19:07:02.259482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.259513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.259703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.259722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.259944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.259963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.260160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.260179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.260368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.260388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.260594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.260631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.260962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.261008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.261229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.261247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.261383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.261402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.261529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.261549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 
00:30:17.339 [2024-07-24 19:07:02.261767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.261787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.261906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.261926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.262136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.262166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.262428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.262459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.262676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.262708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.262864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.262897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.263132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.263162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.263376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.263395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.263523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.263543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 00:30:17.339 [2024-07-24 19:07:02.263752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.263784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.339 qpair failed and we were unable to recover it. 
00:30:17.339 [2024-07-24 19:07:02.264005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.339 [2024-07-24 19:07:02.264037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.264287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.264317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.264488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.264519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.264666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.264706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.264800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.264820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.265059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.265089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.265398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.265429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.265600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.265638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.265795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.265826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.266074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.266105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 
00:30:17.340 [2024-07-24 19:07:02.266354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.266386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.266544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.266563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.266692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.266712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.266925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.266945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.267150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.267170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.267303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.267323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.267465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.267484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.267632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.267652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.267791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.267810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.268026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.268057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 
00:30:17.340 [2024-07-24 19:07:02.268232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.268264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.268411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.268441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.268661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.268694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.268859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.268889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.269098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.269130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.269264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.269295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.269518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.269554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.269800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.269832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.270069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.270088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.270209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.270228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 
00:30:17.340 [2024-07-24 19:07:02.270434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.270453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.270576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.270595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.270744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.270764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.271042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.271073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.271227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.271259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.271412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.271442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.271592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.271633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.271795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.271826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.271981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.272012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.272246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.272266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 
00:30:17.340 [2024-07-24 19:07:02.272528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.272547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.340 qpair failed and we were unable to recover it. 00:30:17.340 [2024-07-24 19:07:02.272689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.340 [2024-07-24 19:07:02.272709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.272907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.272927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.273067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.273097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.273326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.273357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.273582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.273619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.273769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.273801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.274018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.274208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.274389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 
00:30:17.341 [2024-07-24 19:07:02.274528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.274676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.274822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.274968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.274987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.275107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.275126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.275320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.275339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.275526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.275545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.275665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.275685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.275832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.275852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.276001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.276020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 
00:30:17.341 [2024-07-24 19:07:02.276218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.276249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.276583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.276625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.276854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.276874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.277076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.277095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.341 [2024-07-24 19:07:02.277308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.341 [2024-07-24 19:07:02.277327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.341 qpair failed and we were unable to recover it. 00:30:17.610 [2024-07-24 19:07:02.277637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.610 [2024-07-24 19:07:02.277657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.610 qpair failed and we were unable to recover it. 00:30:17.610 [2024-07-24 19:07:02.277845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.610 [2024-07-24 19:07:02.277867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.610 qpair failed and we were unable to recover it. 00:30:17.610 [2024-07-24 19:07:02.277997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.610 [2024-07-24 19:07:02.278016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.610 qpair failed and we were unable to recover it. 00:30:17.610 [2024-07-24 19:07:02.278161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.610 [2024-07-24 19:07:02.278181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.610 qpair failed and we were unable to recover it. 00:30:17.610 [2024-07-24 19:07:02.278374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.610 [2024-07-24 19:07:02.278393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.610 qpair failed and we were unable to recover it. 
00:30:17.610 [2024-07-24 19:07:02.278581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.610 [2024-07-24 19:07:02.278601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.610 qpair failed and we were unable to recover it.
[... the three lines above repeat back-to-back, roughly 200 times, from 19:07:02.278581 through 19:07:02.325504; duplicate occurrences elided ...]
00:30:17.616 [2024-07-24 19:07:02.319267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.616 [2024-07-24 19:07:02.319336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:17.616 qpair failed and we were unable to recover it.
[... six consecutive occurrences (19:07:02.319267 through 19:07:02.320442) report tqpair=0x54fda0 instead of 0x7fe5e8000b90; all other occurrences in the burst report tqpair=0x7fe5e8000b90, always against addr=10.0.0.2, port=4420 ...]
00:30:17.616 [2024-07-24 19:07:02.325789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.325810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.326009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.326028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.326173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.326193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.326392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.326423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.326649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.326681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.326897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.326927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.327097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.327127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.327294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.327326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.327546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.327577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 00:30:17.616 [2024-07-24 19:07:02.327896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.616 [2024-07-24 19:07:02.327928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.616 qpair failed and we were unable to recover it. 
00:30:17.616 [2024-07-24 19:07:02.328098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.328117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.328322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.328354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.328574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.328611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.328776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.328812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.329006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.329037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.329188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.329219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.329500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.329530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.329754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.329786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.330098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.330139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.330339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.330358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 
00:30:17.617 [2024-07-24 19:07:02.330546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.330566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.330713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.330733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.330866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.330885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.331139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.331158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.331299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.331330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.331562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.331593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.331772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.331804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.332041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.332072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.332223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.332254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.332414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.332446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 
00:30:17.617 [2024-07-24 19:07:02.332594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.332633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.332932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.332963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.333198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.333217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.333403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.333422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.333621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.333641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.333782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.333801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.333988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.334032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.334256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.334287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.334449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.334479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.334705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.334737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 
00:30:17.617 [2024-07-24 19:07:02.334973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.334993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.335221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.335240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.335455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.335475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.335684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.335704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.335917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.335936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.336088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.336119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.336277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.336310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.617 [2024-07-24 19:07:02.336466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.617 [2024-07-24 19:07:02.336497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.617 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.336651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.336683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.336860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.336892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 
00:30:17.618 [2024-07-24 19:07:02.337176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.337206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.337420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.337451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.337678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.337709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.337935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.337972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.338196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.338228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.338554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.338585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.338723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.338754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.338924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.338955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.339114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.339134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.339342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.339373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 
00:30:17.618 [2024-07-24 19:07:02.339636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.339668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.339900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.339930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.340073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.340104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.340317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.340347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.340560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.340591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.340764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.340788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.340984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.341016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.341187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.341218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.341445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.341477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.341637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.341669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 
00:30:17.618 [2024-07-24 19:07:02.341927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.341946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.342157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.342188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.342346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.342377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.342610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.342641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.342801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.342832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.343045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.343076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.343291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.343321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.343547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.343578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.343741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.343774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.343936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.343966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 
00:30:17.618 [2024-07-24 19:07:02.344128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.344160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.344396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.344427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.344712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.344743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.345025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.345056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.345284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.345315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.618 qpair failed and we were unable to recover it. 00:30:17.618 [2024-07-24 19:07:02.345475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.618 [2024-07-24 19:07:02.345505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.345816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.345849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.346016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.346047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.346300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.346331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.346549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.346580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 
00:30:17.619 [2024-07-24 19:07:02.346730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.346762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.346922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.346953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.347185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.347216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.347524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.347564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.347798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.347830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.348054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.348085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.348302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.348333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.348549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.348580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.348841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.348872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.349097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.349129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 
00:30:17.619 [2024-07-24 19:07:02.349356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.349377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.349568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.349588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.349746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.349766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.349971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.350001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.350244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.350275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.350503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.350534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.350702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.350733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.350896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.350928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.351069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.351099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.351360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.351391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 
00:30:17.619 [2024-07-24 19:07:02.351614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.351635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.351828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.351848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.351991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.352010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.352147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.352166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.352438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.352469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.352701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.352733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.352944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.352975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.353192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.353222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.353405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.353436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.353766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.353799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 
00:30:17.619 [2024-07-24 19:07:02.353977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.354009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.354291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.354321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.619 [2024-07-24 19:07:02.354571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.619 [2024-07-24 19:07:02.354625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.619 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.354849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.354881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.355108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.355139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.355351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.355370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.355528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.355560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.355796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.355829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.356051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.356070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 00:30:17.620 [2024-07-24 19:07:02.356206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.620 [2024-07-24 19:07:02.356237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.620 qpair failed and we were unable to recover it. 
00:30:17.620 [2024-07-24 19:07:02.356408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.620 [2024-07-24 19:07:02.356441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.620 qpair failed and we were unable to recover it.
00:30:17.620 [the connect()/sock-connection-error pair above repeated 98 more times for tqpair=0x7fe5e8000b90 between 19:07:02.356753 and 19:07:02.380575; every attempt ended with "qpair failed and we were unable to recover it."]
00:30:17.622 [2024-07-24 19:07:02.380788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.622 [2024-07-24 19:07:02.380857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420
00:30:17.622 qpair failed and we were unable to recover it.
00:30:17.622 [the pair above repeated 5 more times for tqpair=0x7fe5d8000b90 between 19:07:02.381088 and 19:07:02.382059; every attempt ended with "qpair failed and we were unable to recover it."]
00:30:17.623 [2024-07-24 19:07:02.382391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.623 [2024-07-24 19:07:02.382425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.623 qpair failed and we were unable to recover it.
00:30:17.623 [the pair above repeated 104 more times for tqpair=0x7fe5e8000b90 between 19:07:02.382665 and 19:07:02.406238; every attempt ended with "qpair failed and we were unable to recover it."]
00:30:17.626 [2024-07-24 19:07:02.406414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.406446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.406683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.406716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.406876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.406908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.407239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.407270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.407486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.407518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.407690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.407722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.408017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.408048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.408205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.408237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.408521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.408552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.408722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.408755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 
00:30:17.626 [2024-07-24 19:07:02.408899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.408929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.409166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.409197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.409415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.409445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.409595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.409633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.409797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.409829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.409990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.410035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.410317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.410337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.410564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.410584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.410789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.410810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.410960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.410991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 
00:30:17.626 [2024-07-24 19:07:02.411214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.411284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.411522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.411555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.411883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.411917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.412168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.412202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.412516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.412547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.412733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.412765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.413017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.413048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.413201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.413231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.413394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.413425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.413576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.413618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 
00:30:17.626 [2024-07-24 19:07:02.413819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.413850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.414075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.414105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.414251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.414283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.414535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.414576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.414805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.414837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.626 qpair failed and we were unable to recover it. 00:30:17.626 [2024-07-24 19:07:02.415146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.626 [2024-07-24 19:07:02.415176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.415335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.415367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.415532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.415563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.415789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.415821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.416041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.416072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 
00:30:17.627 [2024-07-24 19:07:02.416225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.416256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.416474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.416508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.416734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.416766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.416946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.416977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.417186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.417206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.417423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.417442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.417575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.417595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.417805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.417837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.418081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.418112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.418342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.418372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 
00:30:17.627 [2024-07-24 19:07:02.418599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.418637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.418919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.418950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.419175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.419206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.419372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.419403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.419658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.419678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.419817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.419836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.420023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.420043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.420162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.420182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.420315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.420335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.420533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.420565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 
00:30:17.627 [2024-07-24 19:07:02.420755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.420788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.421009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.421040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.421197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.421229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.421392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.421424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.421635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.421666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.421958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.421990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.422205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.422236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.422474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.422505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.422678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.422711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.422938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.422968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 
00:30:17.627 [2024-07-24 19:07:02.423118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.423138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.423368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.423399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.423699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.627 [2024-07-24 19:07:02.423732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.627 qpair failed and we were unable to recover it. 00:30:17.627 [2024-07-24 19:07:02.423988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.424025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.424242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.424273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.424491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.424523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.424802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.424822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.424966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.424986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.425198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.425229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.425446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.425477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 
00:30:17.628 [2024-07-24 19:07:02.425691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.425722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.425953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.425984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.426203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.426222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.426423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.426443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.426572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.426592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.426731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.426751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.426895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.426914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.427061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.427080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.427267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.427286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.427424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.427444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 
00:30:17.628 [2024-07-24 19:07:02.427563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.427583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.427783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.427803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.427947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.427966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.428161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.428192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.428351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.428381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.428714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.428747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.428921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.428952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.429098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.429118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.429321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.429352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.429612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.429644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 
00:30:17.628 [2024-07-24 19:07:02.429794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.429826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.430070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.430102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.430261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.430291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.430612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.430643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.430803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.430835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.431083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.431115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.431274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.431306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.431546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.431577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.431891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.431923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 00:30:17.628 [2024-07-24 19:07:02.432098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.432128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.628 qpair failed and we were unable to recover it. 
00:30:17.628 [2024-07-24 19:07:02.432294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.628 [2024-07-24 19:07:02.432313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.432459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.432478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.432676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.432696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.432833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.432856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.432996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.433027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.433313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.433345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.433576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.433623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.433866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.433897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.434120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.434151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.434376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.434409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 
00:30:17.629 [2024-07-24 19:07:02.434563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.434595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.434821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.434854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.435105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.435136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.435387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.435407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.435558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.435589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.435759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.435790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.436078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.436110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.436276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.436308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.436432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.436451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.436613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.436634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 
00:30:17.629 [2024-07-24 19:07:02.436793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.436813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.436951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.436982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.437141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.437173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.437401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.437432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.437644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.437665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.437925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.437956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.438167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.438198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.438509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.438541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.438823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.438855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.439021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.439051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 
00:30:17.629 [2024-07-24 19:07:02.439257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.439325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.439521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.439556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.439860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.629 [2024-07-24 19:07:02.439894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.629 qpair failed and we were unable to recover it. 00:30:17.629 [2024-07-24 19:07:02.440195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.440226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 00:30:17.630 [2024-07-24 19:07:02.440389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.440420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 00:30:17.630 [2024-07-24 19:07:02.440546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.440576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 00:30:17.630 [2024-07-24 19:07:02.440757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.440790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 00:30:17.630 [2024-07-24 19:07:02.441005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.441036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 00:30:17.630 [2024-07-24 19:07:02.441210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.441241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 00:30:17.630 [2024-07-24 19:07:02.441512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.630 [2024-07-24 19:07:02.441549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.630 qpair failed and we were unable to recover it. 
00:30:17.630 [... identical connect() failures (errno = 111) and unrecoverable qpair errors for tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 continue from 19:07:02.441866 through 19:07:02.485438 ...]
00:30:17.636 [2024-07-24 19:07:02.485634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.636 [2024-07-24 19:07:02.485654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.636 qpair failed and we were unable to recover it.
00:30:17.636 [2024-07-24 19:07:02.485857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.485880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.486086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.486106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.486295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.486326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.486557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.486589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.486822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.486855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.487088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.487120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.487377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.487408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.487632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.487664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.487960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.487991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.488206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.488237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 
00:30:17.636 [2024-07-24 19:07:02.488404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.488424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.488612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.488631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.488760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.488779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.488967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.488986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.489184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.489216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.489444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.489464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.489718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.489738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.489922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.489941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.490200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.490230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.490462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.490492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 
00:30:17.636 [2024-07-24 19:07:02.490710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.490741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.491058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.491080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.491296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.491327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.491491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.491523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.491692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.491724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.491956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.491988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.492217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.492248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.492570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.492610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.492919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.492951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.493095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.493125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 
00:30:17.636 [2024-07-24 19:07:02.493433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.493463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.636 qpair failed and we were unable to recover it. 00:30:17.636 [2024-07-24 19:07:02.493689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.636 [2024-07-24 19:07:02.493722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.494006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.494037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.494370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.494401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.494648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.494668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.494869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.494888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.495173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.495214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.495424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.495455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.495645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.495677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.495936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.495967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 
00:30:17.637 [2024-07-24 19:07:02.496119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.496141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.496343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.496374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.496558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.496589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.496792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.496824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.497049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.497080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.497314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.497345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.497502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.497522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.497724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.497743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.497881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.497900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.498097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.498116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 
00:30:17.637 [2024-07-24 19:07:02.498323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.498342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.498558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.498589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.498764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.498795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.499096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.499127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.499380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.499410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.499552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.499583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.499874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.499905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.500074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.500105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.500364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.500384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.500676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.500708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 
00:30:17.637 [2024-07-24 19:07:02.500923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.500955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.501269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.501300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.501555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.501586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.501886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.501917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.502142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.502174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.502339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.502358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.502670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.502689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.502834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.502854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.503068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.637 [2024-07-24 19:07:02.503088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.637 qpair failed and we were unable to recover it. 00:30:17.637 [2024-07-24 19:07:02.503234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.503253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 
00:30:17.638 [2024-07-24 19:07:02.503398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.503418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.503637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.503658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.503902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.503922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.504057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.504076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.504280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.504299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.504512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.504531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.504738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.504758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.504948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.504968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.505122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.505153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.505364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.505394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 
00:30:17.638 [2024-07-24 19:07:02.505626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.505664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.505820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.505839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.506168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.506199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.506430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.506461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.506682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.506714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.506944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.506974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.507260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.507291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.507502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.507534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.507682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.507713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.507928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.507948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 
00:30:17.638 [2024-07-24 19:07:02.508223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.508253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.508468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.508499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.508804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.508824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.509030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.509049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.509359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.509378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.509583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.509623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.509792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.509823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.510065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.510097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.510406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.510437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.510737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.510758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 
00:30:17.638 [2024-07-24 19:07:02.511055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.511086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.511319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.511350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.511634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.511666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.512007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.512037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.512323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.512353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.512566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.512597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.638 [2024-07-24 19:07:02.512834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.638 [2024-07-24 19:07:02.512866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.638 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.513184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.513215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.513479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.513499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.513703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.513723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 
00:30:17.639 [2024-07-24 19:07:02.514020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.514052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.514335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.514365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.514611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.514643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.514895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.514926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.515162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.515194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.515446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.515487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.515673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.515693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.516027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.516058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.516286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.516317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.516483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.516514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 
00:30:17.639 [2024-07-24 19:07:02.516794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.516817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.517034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.517053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.517190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.517209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.517491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.517511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.517703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.517723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.517955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.517974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.518193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.518212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.518420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.518439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.518644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.518664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 00:30:17.639 [2024-07-24 19:07:02.518867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.639 [2024-07-24 19:07:02.518886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.639 qpair failed and we were unable to recover it. 
00:30:17.639 [2024-07-24 19:07:02.519088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.639 [2024-07-24 19:07:02.519108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.639 qpair failed and we were unable to recover it.
00:30:17.639 [... previous three-message sequence repeated continuously, unchanged except for timestamps, from 2024-07-24 19:07:02.519258 through 19:07:02.572493 ...]
00:30:17.645 [2024-07-24 19:07:02.572753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.645 [2024-07-24 19:07:02.572785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.645 qpair failed and we were unable to recover it.
00:30:17.645 [2024-07-24 19:07:02.572947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.572978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.573161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.573192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.573358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.573388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.573670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.573701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.574042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.574073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.574302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.574333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.574586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.574608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.574795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.574818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.575008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.575027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.575289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.575319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 
00:30:17.645 [2024-07-24 19:07:02.575542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.575563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.575764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.575784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.575987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.576018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.576349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.576380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.576547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.576578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.576867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.576898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.577128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.577159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.577474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.577505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.577636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.577657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 00:30:17.645 [2024-07-24 19:07:02.577944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.645 [2024-07-24 19:07:02.577975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.645 qpair failed and we were unable to recover it. 
00:30:17.645 [2024-07-24 19:07:02.578285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.578317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.578610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.578642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.578946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.578977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.579139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.579170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.579415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.579447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.579677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.579697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.579956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.579987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.580143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.580175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.580435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.580466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.580696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.580728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 
00:30:17.646 [2024-07-24 19:07:02.581013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.581044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.581325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.581355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.581622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.581654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.581976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.582007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.582318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.582350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.582649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.582681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.583009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.583041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.583347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.583379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.583717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.583748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.583918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.583949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 
00:30:17.646 [2024-07-24 19:07:02.584193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.584213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.584472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.584502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.584785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.584816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.585044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.585075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.585202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.585234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.585514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.585546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.585818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.585850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.586066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.586103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.586249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.586281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.586531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.586562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 
00:30:17.646 [2024-07-24 19:07:02.586800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.586832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.587113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.587145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.587363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.587394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.587708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.587740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.588078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.588109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.588392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.588424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.588722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.646 [2024-07-24 19:07:02.588741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.646 qpair failed and we were unable to recover it. 00:30:17.646 [2024-07-24 19:07:02.589001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.589031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.589354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.589385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.589620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.589651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 
00:30:17.647 [2024-07-24 19:07:02.589958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.589977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.590174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.590194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.590481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.590500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.590694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.590714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.590979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.591023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.591258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.591292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.591531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.591561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.591790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.591821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.592029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.592048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.592269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.592300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 
00:30:17.647 [2024-07-24 19:07:02.592617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.592650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.592874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.592906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.593128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.593159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.593387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.593418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.593742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.593762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.593946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.593966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.594247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.594278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.594588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.594626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.594853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.594884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.595100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.595132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 
00:30:17.647 [2024-07-24 19:07:02.595377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.595408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.595620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.595653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.595958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.595989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.596327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.596357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.596597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.596634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.596946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.596977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.597197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.597217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.597427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.597463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.597778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.597821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.598101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.598145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 
00:30:17.647 [2024-07-24 19:07:02.598361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.598392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.598703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.598735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.598963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.598994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.599303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.599334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.599626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.647 [2024-07-24 19:07:02.599659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.647 qpair failed and we were unable to recover it. 00:30:17.647 [2024-07-24 19:07:02.599911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.599942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.600100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.600131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.600431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.600462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.600627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.600659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.600970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.601013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 
00:30:17.648 [2024-07-24 19:07:02.601213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.601232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.601432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.601452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.601710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.601730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.601936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.601967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.602196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.602226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.602470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.602501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.602742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.602762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.602965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.602985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.603266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.603285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.603589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.603612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 
00:30:17.648 [2024-07-24 19:07:02.603920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.603951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.604255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.604287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.604597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.604636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.604798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.604830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.605008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.605039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.605282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.605314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.605592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.605632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.605942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.605974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.606124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.606143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.606309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.606328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 
00:30:17.648 [2024-07-24 19:07:02.606484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.606503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.606785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.606805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.606953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.606983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.607212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.607242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.607407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.648 [2024-07-24 19:07:02.607438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.648 qpair failed and we were unable to recover it. 00:30:17.648 [2024-07-24 19:07:02.607753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.930 [2024-07-24 19:07:02.607800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.930 qpair failed and we were unable to recover it. 00:30:17.930 [2024-07-24 19:07:02.608027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.930 [2024-07-24 19:07:02.608047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.930 qpair failed and we were unable to recover it. 00:30:17.930 [2024-07-24 19:07:02.608237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.930 [2024-07-24 19:07:02.608263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.930 qpair failed and we were unable to recover it. 00:30:17.930 [2024-07-24 19:07:02.608520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.930 [2024-07-24 19:07:02.608539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.930 qpair failed and we were unable to recover it. 00:30:17.930 [2024-07-24 19:07:02.608739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.930 [2024-07-24 19:07:02.608758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.930 qpair failed and we were unable to recover it. 
00:30:17.930 [2024-07-24 19:07:02.608943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:17.930 [2024-07-24 19:07:02.608962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 
00:30:17.930 qpair failed and we were unable to recover it. 
00:30:17.936 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats verbatim for every subsequent reconnect attempt from 19:07:02.609249 through 19:07:02.653093; only the timestamps differ ...] 
00:30:17.936 [2024-07-24 19:07:02.653238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.653257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.653455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.653474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.653598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.653622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.653828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.653848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.654131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.654151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.654408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.654427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.654698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.654719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.654841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.654860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.655049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.655068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.655289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.655308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 
00:30:17.936 [2024-07-24 19:07:02.655458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.655477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.655667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.655688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.655828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.655847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.656050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.936 [2024-07-24 19:07:02.656070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.936 qpair failed and we were unable to recover it. 00:30:17.936 [2024-07-24 19:07:02.656273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.656292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.656478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.656497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.656685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.656705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.656824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.656844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.657126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.657146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.657334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.657354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-24 19:07:02.657539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.657558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.657839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.657860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.658057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.658077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.658198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.658218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.658416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.658437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.658625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.658644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.658834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.658853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.659055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.659078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.659339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.659359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.659573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.659593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-24 19:07:02.659728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.659747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.659931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.659949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.660148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.660167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.660312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.660332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.660532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.660553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.660841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.660860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.661029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.661048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.661234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.661253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.661403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.661421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.661544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.661564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-24 19:07:02.661862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.661882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.662123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.662143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.662292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.662311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.662445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.662464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.662722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.662742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.662935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.662954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.663257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.663276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.663479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.663498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.663695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.663714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.663947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.663967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-24 19:07:02.664159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.937 [2024-07-24 19:07:02.664178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.937 qpair failed and we were unable to recover it. 00:30:17.937 [2024-07-24 19:07:02.664366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.664386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.664697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.664716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.664913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.664933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.665077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.665097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.665218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.665237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.665381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.665400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.665634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.665653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.665855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.665875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.666010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.666029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-24 19:07:02.666149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.666169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.666498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.666517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.666716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.666736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.666934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.666954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.667163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.667182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.667397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.667417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.667632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.667652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.667901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.667923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.668121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.668141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.668336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.668356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-24 19:07:02.668559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.668578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.668790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.668810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.668942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.668962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.669079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.669099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.669380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.669399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.669654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.669675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.669877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.669897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.670155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.670174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.670373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.670393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.670613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.670634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-24 19:07:02.670903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.670923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.671095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.671115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.671255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.671275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.671480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.671499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.671807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.671826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.672019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.672038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.672326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.672346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.672559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.672579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.938 qpair failed and we were unable to recover it. 00:30:17.938 [2024-07-24 19:07:02.672813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.938 [2024-07-24 19:07:02.672833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.673026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.673045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-24 19:07:02.673304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.673323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.673620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.673639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.673842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.673861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.673980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.673999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.674227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.674247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.674464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.674484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.674670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.674690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.674891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.674911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.675106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.675126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.675356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.675375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-24 19:07:02.675587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.675612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.675916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.675935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.676241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.676260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.676489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.676509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.676776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.676797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.677058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.677078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.677217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.677237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.677332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.677354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.677621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.677642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.677902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.677922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-24 19:07:02.678122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.678141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.678336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.678356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.678575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.678594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.678788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.678808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.679003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.679022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.679222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.679241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.679389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.679409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.679620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.679641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.679928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.679947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.680163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.680182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-24 19:07:02.680384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.680402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.680609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.680629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.939 [2024-07-24 19:07:02.680859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.939 [2024-07-24 19:07:02.680878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.939 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.681029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.681047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.681353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.681373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.681629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.681649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.681932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.681951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.682155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.682174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.682307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.682326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 00:30:17.940 [2024-07-24 19:07:02.682615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.940 [2024-07-24 19:07:02.682636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.940 qpair failed and we were unable to recover it. 
00:30:17.940 [2024-07-24 19:07:02.682866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.940 [2024-07-24 19:07:02.682885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.940 qpair failed and we were unable to recover it.
00:30:17.940 [... the same three-line sequence repeats for every subsequent reconnect attempt, from [2024-07-24 19:07:02.683086] through [2024-07-24 19:07:02.735769] (log timestamps 00:30:17.940-00:30:17.946): connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:17.946 [2024-07-24 19:07:02.735991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.736023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.736240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.736271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.736554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.736594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.736801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.736821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.737099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.737129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.737467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.737497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.737725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.737756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.738044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.738075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.738395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.738426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.738714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.738746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 
00:30:17.946 [2024-07-24 19:07:02.738961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.738991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.739286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.739317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.739546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.739577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.739814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.739845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.740069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.740101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.740326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.740358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.740687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.740719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.741004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.741036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.741351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.741383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 00:30:17.946 [2024-07-24 19:07:02.741700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.946 [2024-07-24 19:07:02.741733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.946 qpair failed and we were unable to recover it. 
00:30:17.946 [2024-07-24 19:07:02.742056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.742087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.742404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.742435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.742750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.742781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.743096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.743128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.743447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.743479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.743706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.743738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.744109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.744140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.744368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.744400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.744680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.744713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.744962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.744994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 
00:30:17.947 [2024-07-24 19:07:02.745310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.745341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.745640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.745671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.745980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.746012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.746238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.746257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.746399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.746422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.746681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.746713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.746931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.746961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.747270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.747300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.747612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.747644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.747857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.747887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 
00:30:17.947 [2024-07-24 19:07:02.748148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.748179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.748404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.748424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.748735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.748755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.749101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.749132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.749433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.749464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.749775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.749807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.750106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.750137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.750444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.750475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.750714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.750746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.750978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.751010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 
00:30:17.947 [2024-07-24 19:07:02.751322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.751354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.751564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.751595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.751891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.751922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.752253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.752285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.752576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.752595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.947 [2024-07-24 19:07:02.752907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.947 [2024-07-24 19:07:02.752927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.947 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.753238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.753270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.753425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.753456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.753690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.753722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.754049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.754085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 
00:30:17.948 [2024-07-24 19:07:02.754355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.754387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.754558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.754590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.754884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.754916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.755228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.755260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.755544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.755574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.755863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.755894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.756175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.756205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.756491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.756511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.756717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.756737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.756953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.756972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 
00:30:17.948 [2024-07-24 19:07:02.757307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.757337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.757630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.757661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.757975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.758007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.758296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.758327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.758645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.758683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.759014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.759044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.759285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.759315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.759667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.759699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.759916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.759947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.760269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.760299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 
00:30:17.948 [2024-07-24 19:07:02.760566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.760596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.760836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.760867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.761082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.761113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.761274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.761306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.761519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.761549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.761906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.761937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.762262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.762281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.762567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.762598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.762836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.762868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.763180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.763211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 
00:30:17.948 [2024-07-24 19:07:02.763499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.948 [2024-07-24 19:07:02.763519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.948 qpair failed and we were unable to recover it. 00:30:17.948 [2024-07-24 19:07:02.763827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.763846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.764167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.764187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.764494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.764513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.764827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.764858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.765169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.765200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.765500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.765531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.765862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.765893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.766203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.766235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.766468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.766500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 
00:30:17.949 [2024-07-24 19:07:02.766756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.766787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.767036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.767056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.767349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.767368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.767640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.767673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.767981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.768013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.768302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.768322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.768638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.768670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.768974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.769005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.769311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.769341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.769644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.769677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 
00:30:17.949 [2024-07-24 19:07:02.769980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.770011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.770312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.770331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.770519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.770538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.770684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.770705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.770841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.770864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.771127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.771159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.771451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.771481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.771796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.771828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.772123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.772155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 00:30:17.949 [2024-07-24 19:07:02.772483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.949 [2024-07-24 19:07:02.772514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.949 qpair failed and we were unable to recover it. 
00:30:17.949 [2024-07-24 19:07:02.772778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.772810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.773090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.773109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.773300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.773320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.773586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.773639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.773972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.774003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.774322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.774354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.774642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.774675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.774998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.775029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.775260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.775291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 00:30:17.950 [2024-07-24 19:07:02.775516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.950 [2024-07-24 19:07:02.775547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.950 qpair failed and we were unable to recover it. 
00:30:17.950 [2024-07-24 19:07:02.775870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:17.950 [2024-07-24 19:07:02.775902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 
00:30:17.950 qpair failed and we were unable to recover it. 
00:30:17.950 [... the same three-line error repeats for every reconnect attempt from 19:07:02.775870 through 19:07:02.838739: connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) on tqpair=0x7fe5e8000b90 and the qpair cannot be recovered ...] 
00:30:17.956 [2024-07-24 19:07:02.838718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:17.956 [2024-07-24 19:07:02.838739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 
00:30:17.956 qpair failed and we were unable to recover it. 
00:30:17.956 [2024-07-24 19:07:02.838991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.839023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.839325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.839350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.839598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.839625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.839893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.839912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.840216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.840249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.840570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.840610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.840927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.840959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.841276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.841307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.841615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.841651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.841950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.841982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 
00:30:17.956 [2024-07-24 19:07:02.842303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.842335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.842664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.842698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.843019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.843052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.843345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.843377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.843700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.843734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.844058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.844089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.844337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.844369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.844623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.844657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.844897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.844929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.845182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.845204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 
00:30:17.956 [2024-07-24 19:07:02.845421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.845441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.845656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.845689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.846015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.846058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.846296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.846317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.956 [2024-07-24 19:07:02.846528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.956 [2024-07-24 19:07:02.846548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.956 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.846838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.846860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.847184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.847216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.847518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.847551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.847831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.847863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.848187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.848218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 
00:30:17.957 [2024-07-24 19:07:02.848451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.848471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.848700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.848721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.849040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.849085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.849390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.849422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.849743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.849776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.850106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.850137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.850462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.850495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.850788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.850821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.851138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.851170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.851483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.851515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 
00:30:17.957 [2024-07-24 19:07:02.851865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.851898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.852219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.852257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.852494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.852515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.852742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.852763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.853085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.853117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.853290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.853321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.853638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.853672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.854022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.854055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.854376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.854395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.854601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.854630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 
00:30:17.957 [2024-07-24 19:07:02.854926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.854964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.855186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.855218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.855460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.855492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.855734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.855767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.856089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.856121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.856435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.856467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.856825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.856859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.857160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.857192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.857433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.857454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.857723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.857745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 
00:30:17.957 [2024-07-24 19:07:02.857953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.957 [2024-07-24 19:07:02.857973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.957 qpair failed and we were unable to recover it. 00:30:17.957 [2024-07-24 19:07:02.858277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.858309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.858630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.858662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.858984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.859027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.859356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.859388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.859634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.859666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.859907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.859939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.860198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.860219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.860445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.860465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.860775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.860808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 
00:30:17.958 [2024-07-24 19:07:02.861127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.861159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.861484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.861518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.861761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.861794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.862094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.862127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.862432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.862466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.862780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.862813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.863058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.863079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.863378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.863399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.863595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.863624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.863926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.863957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 
00:30:17.958 [2024-07-24 19:07:02.864284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.864317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.864586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.864634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.864969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.865001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.865382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.865414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.865676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.865709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.866038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.866071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.866427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.866461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.866722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.866755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.867111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.867144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.867411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.867443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 
00:30:17.958 [2024-07-24 19:07:02.867682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.867715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.868023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.868055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.868360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.868392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.868723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.868756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.869033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.869065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.869255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.869275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.869554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.958 [2024-07-24 19:07:02.869585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.958 qpair failed and we were unable to recover it. 00:30:17.958 [2024-07-24 19:07:02.870022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.870044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.870256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.870276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.870415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.870436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 
00:30:17.959 [2024-07-24 19:07:02.870731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.870753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.871027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.871060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.871283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.871315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.871652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.871675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.871955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.871976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.872336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.872367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.872665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.872699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.873023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.873056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.873395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.873416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.873558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.873578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 
00:30:17.959 [2024-07-24 19:07:02.873749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.873771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.873912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.873932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.874247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.874279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.874628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.874662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.874891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.874924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.875186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.875219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.875563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.875584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.875887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.875920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.876249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.876281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.876612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.876634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 
00:30:17.959 [2024-07-24 19:07:02.876927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.876970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.877312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.877350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.877686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.877721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.878053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.878085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.878408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.878440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.878791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.878825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.959 qpair failed and we were unable to recover it. 00:30:17.959 [2024-07-24 19:07:02.879129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.959 [2024-07-24 19:07:02.879161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.960 qpair failed and we were unable to recover it. 00:30:17.960 [2024-07-24 19:07:02.879433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.960 [2024-07-24 19:07:02.879465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.960 qpair failed and we were unable to recover it. 00:30:17.960 [2024-07-24 19:07:02.879689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.960 [2024-07-24 19:07:02.879723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.960 qpair failed and we were unable to recover it. 00:30:17.960 [2024-07-24 19:07:02.880092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.960 [2024-07-24 19:07:02.880124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:17.960 qpair failed and we were unable to recover it. 
00:30:17.960 [2024-07-24 19:07:02.880354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.960 [2024-07-24 19:07:02.880387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:17.960 qpair failed and we were unable to recover it.
00:30:18.259 [... the previous two messages repeated for every reconnect attempt between 19:07:02.880354 and 19:07:02.947164: each connect() to 10.0.0.2, port=4420 returned errno = 111 (ECONNREFUSED) and the qpair could not be recovered ...]
00:30:18.259 [2024-07-24 19:07:02.947164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.259 [2024-07-24 19:07:02.947197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.259 qpair failed and we were unable to recover it.
00:30:18.259 [2024-07-24 19:07:02.947566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.259 [2024-07-24 19:07:02.947598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.259 qpair failed and we were unable to recover it. 00:30:18.259 [2024-07-24 19:07:02.947968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.259 [2024-07-24 19:07:02.948002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.259 qpair failed and we were unable to recover it. 00:30:18.259 [2024-07-24 19:07:02.948242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.259 [2024-07-24 19:07:02.948275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.259 qpair failed and we were unable to recover it. 00:30:18.259 [2024-07-24 19:07:02.948644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.259 [2024-07-24 19:07:02.948678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.259 qpair failed and we were unable to recover it. 00:30:18.259 [2024-07-24 19:07:02.948980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.259 [2024-07-24 19:07:02.949013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.259 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.949369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.949400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.949720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.949741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.950013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.950057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.950364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.950398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.950697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.950719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 
00:30:18.260 [2024-07-24 19:07:02.951009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.951042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.951373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.951405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.951712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.951734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.952040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.952060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.952207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.952228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.952532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.952565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.952907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.952942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.953269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.953301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.953626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.953660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.953985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.954018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 
00:30:18.260 [2024-07-24 19:07:02.954313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.954334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.954645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.954688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.954996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.955029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.955355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.955387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.955729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.955763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.956063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.956095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.956405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.956450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.956612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.956634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.956834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.956854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.957083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.957104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 
00:30:18.260 [2024-07-24 19:07:02.957397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.957418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.957695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.957717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.958065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.958098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.958326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.958359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.958683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.958716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.959035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.959067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.959390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.959423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.959728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.959762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.960102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.260 [2024-07-24 19:07:02.960123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.260 qpair failed and we were unable to recover it. 00:30:18.260 [2024-07-24 19:07:02.960346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.960367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 
00:30:18.261 [2024-07-24 19:07:02.960615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.960637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.960834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.960855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.961102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.961122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.961320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.961339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.961540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.961579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.961898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.961931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.962178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.962198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.962445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.962465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.962742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.962775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.963086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.963120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 
00:30:18.261 [2024-07-24 19:07:02.963370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.963401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.963700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.963721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.963968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.963988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.964265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.964286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.964558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.964578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.964810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.964831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.965093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.965114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.965352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.965373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.965581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.965610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.965826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.965847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 
00:30:18.261 [2024-07-24 19:07:02.966100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.966120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.966342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.966366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.966594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.966624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.966857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.966879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.967157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.967190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.967553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.967585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.967919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.967953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.968282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.968314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.968640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.968674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.968914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.968946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 
00:30:18.261 [2024-07-24 19:07:02.969278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.969310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.969639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.969672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.261 [2024-07-24 19:07:02.969999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.261 [2024-07-24 19:07:02.970031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.261 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.970355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.970389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.970623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.970656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.970964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.970997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.971305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.971338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.971562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.971593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.971909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.971929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.972200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.972220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 
00:30:18.262 [2024-07-24 19:07:02.972547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.972568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.972887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.972908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.973220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.973240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.973564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.973584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.973901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.973922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.974119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.974140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.974448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.974469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.974764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.974786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.975031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.975063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.975432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.975464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 
00:30:18.262 [2024-07-24 19:07:02.975795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.975829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.976091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.976124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.976454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.976498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.976791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.976824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.977075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.977106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.977347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.977381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.977627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.977669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.977940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.977980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.978211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.978242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.978541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.978574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 
00:30:18.262 [2024-07-24 19:07:02.978963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.978996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.979295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.979333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.979641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.979675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.980003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.980034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.980292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.262 [2024-07-24 19:07:02.980324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.262 qpair failed and we were unable to recover it. 00:30:18.262 [2024-07-24 19:07:02.980569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.980589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.980949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.980970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.981246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.981278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.981632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.981665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.981928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.981961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 
00:30:18.263 [2024-07-24 19:07:02.982260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.982292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.982614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.982648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.982890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.982923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.983200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.983243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.983550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.983572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.983877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.983899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.984097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.984421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.984453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.984754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.984788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.985137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.985169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 
00:30:18.263 [2024-07-24 19:07:02.985410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.985443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.985636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.985668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.985999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.986031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.986358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.986390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.986617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.986638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.986934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.986955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.987263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.987295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.987621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.987656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.987820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.987853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.988186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.988217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 
00:30:18.263 [2024-07-24 19:07:02.988520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.988551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.988900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.988942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.989270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.989302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.989546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.989579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.989820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.989852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.990132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.990164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.990405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.990452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.990765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.990799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.991134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.991166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 00:30:18.263 [2024-07-24 19:07:02.991492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.263 [2024-07-24 19:07:02.991525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.263 qpair failed and we were unable to recover it. 
00:30:18.263 [2024-07-24 19:07:02.991796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.991821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.992127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.992152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.992456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.992477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.992793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.992826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.993146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.993180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.993353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.993384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.993703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.993724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.994032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.994064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.994384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.994416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.994740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.994763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 
00:30:18.264 [2024-07-24 19:07:02.995073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.995105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.995348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.995381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.995608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.995630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.995958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.995997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.996335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.996368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.996668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.996691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.996908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.996942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.997236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.997269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.997515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.997548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.997888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.997923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 
00:30:18.264 [2024-07-24 19:07:02.998223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.998256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.998577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.998621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.998923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.998955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.999301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.999333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.999527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.999559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:02.999869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:02.999890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.000191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.000213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.000544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.000575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.000919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.000957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.001282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.001315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 
00:30:18.264 [2024-07-24 19:07:03.001641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.001675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.001842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.001874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.002197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.002241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.002442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.002463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.002664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.002686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.264 qpair failed and we were unable to recover it. 00:30:18.264 [2024-07-24 19:07:03.002962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.264 [2024-07-24 19:07:03.002982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.003276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.003298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.003495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.003516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.003730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.003764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.004064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.004097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 
00:30:18.265 [2024-07-24 19:07:03.004411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.004443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.004763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.004796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.005129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.005162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.005489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.005523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.005746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.005780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.006036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.006069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.006429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.006462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.006786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.006807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.007006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.007027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.007303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.007341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 
00:30:18.265 [2024-07-24 19:07:03.007695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.007727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.008049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.008082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.008408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.008439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.008765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.008798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.009106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.009138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.009470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.009504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.009741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.009776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.010128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.010161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.010390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.010422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.010747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.010780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 
00:30:18.265 [2024-07-24 19:07:03.011040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.011072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.011321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.011353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.011648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.011670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.011890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.011910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.012112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.012133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.012414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.012435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.012725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.012759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.013092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.013124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.013449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.013489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.013807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.013841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 
00:30:18.265 [2024-07-24 19:07:03.014082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.014114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.014434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.014466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.265 qpair failed and we were unable to recover it. 00:30:18.265 [2024-07-24 19:07:03.014791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.265 [2024-07-24 19:07:03.014825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.015162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.015194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.015496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.015528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.015775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.015809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.016159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.016191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.016437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.016470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.016699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.016732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.017024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.017072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 
00:30:18.266 [2024-07-24 19:07:03.017313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.017345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.017678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.017699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.018009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.018042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.018295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.018327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.018656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.018678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.018987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.019019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.019339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.019370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.019659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.019692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.019992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.020024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.020338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.020371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 
00:30:18.266 [2024-07-24 19:07:03.020674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.020707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.020938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.020970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.021283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.021315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.021644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.021678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.021962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.021994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.022345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.022377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.022699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.022721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.022943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.022963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.023205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.023226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 00:30:18.266 [2024-07-24 19:07:03.023569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.266 [2024-07-24 19:07:03.023601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.266 qpair failed and we were unable to recover it. 
00:30:18.266 [2024-07-24 19:07:03.023934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.266 [2024-07-24 19:07:03.023967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.266 qpair failed and we were unable to recover it.
00:30:18.266 [2024-07-24 19:07:03.024205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.266 [2024-07-24 19:07:03.024226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.024429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.024462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.024816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.024850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.025151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.025173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2691199 Killed "${NVMF_APP[@]}" "$@"
00:30:18.267 [2024-07-24 19:07:03.025374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.025395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.025698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.025719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.025940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.025960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:18.267 [2024-07-24 19:07:03.026177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.026198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
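The "line 36: 2691199 Killed" notice above is bash's job-termination message from target_disconnect.sh: the test deliberately kills the nvmf_tgt it started earlier (pid 2691199), which is exactly why every connect() in this window is refused, and then calls disconnect_init 10.0.0.2 to bring a fresh target up. A rough sketch of that kill step, assuming the test suite's usual variable names; the real logic lives in test/nvmf/host/target_disconnect.sh:

    # Tear down the running target; when the job dies, bash prints the
    # "<script>: line N: <pid> Killed ..." notice captured in this log.
    kill -9 "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true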
00:30:18.267 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:18.267 [2024-07-24 19:07:03.026484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.026507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:18.267 [2024-07-24 19:07:03.026779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.026802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:18.267 [2024-07-24 19:07:03.027089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.027111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:18.267 [2024-07-24 19:07:03.027446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.027469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.027627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.027648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.027865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.027887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.028106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.028127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
00:30:18.267 [2024-07-24 19:07:03.028275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.267 [2024-07-24 19:07:03.028298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.267 qpair failed and we were unable to recover it.
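The traced nvmfappstart -m 0xF0 relaunches the target with reactor core mask 0xF0. SPDK core masks are per-core bitmaps: 0xF0 = 0b11110000 selects cores 4-7, so the restarted nvmf_tgt pins its reactors there. A quick bash one-off to decode any such mask (illustrative, not from the suite):

    # Decode which cores an SPDK -m core mask enables.
    mask=0xF0
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done
    # 0xF0 -> core 4, core 5, core 6, core 7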
00:30:18.267 [2024-07-24 19:07:03.028451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.028471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.028763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.028785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.029066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.029087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.029395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.029416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.029734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.029755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.030026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.030047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.030335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.030356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.030696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.030717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.031017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.031037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.267 [2024-07-24 19:07:03.031361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.031381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 
00:30:18.267 [2024-07-24 19:07:03.031684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.267 [2024-07-24 19:07:03.031708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.267 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.031926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.031947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.032254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.032275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.032479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.032500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.032797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.032819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.033144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.033169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.033422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.033443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.033643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.033665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.033971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.033992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 
00:30:18.268 [2024-07-24 19:07:03.034343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.034364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2692019
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.034651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.034673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2692019
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:18.268 [2024-07-24 19:07:03.034908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.034929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2692019 ']'
00:30:18.268 [2024-07-24 19:07:03.035229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.035251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:18.268 [2024-07-24 19:07:03.035550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.035572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:18.268 [2024-07-24 19:07:03.035842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.035864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
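The nvmf/common.sh trace above shows the actual restart: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask) and -m 0xF0 (core mask), its pid is recorded as nvmfpid=2692019, and waitforlisten is then invoked on that pid. Reconstructed as a standalone command; the backgrounding and pid capture are implied by the nvmfpid assignment rather than visible in the trace:

    # Relaunch the target inside its test netns and remember its pid.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!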
00:30:18.268 [2024-07-24 19:07:03.036155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.036178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:18.268 [2024-07-24 19:07:03.036451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.036473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
19:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.036775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.036796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.037087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.037108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.037319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.037339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.037633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.037655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.037874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.037896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.038095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.038116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
00:30:18.268 [2024-07-24 19:07:03.038463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.268 [2024-07-24 19:07:03.038484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.268 qpair failed and we were unable to recover it.
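waitforlisten 2692019, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the new target answers on its RPC socket; the "Waiting for process to start up..." line is its progress message. A minimal sketch of that polling pattern, assuming the real helper in autotest_common.sh does roughly this (the actual implementation checks readiness through SPDK's RPC client rather than just testing for the socket file):

    # Poll until the target process exposes its RPC socket, or give up.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do            # mirrors max_retries=100
            kill -0 "$pid" 2>/dev/null || return 1 # target died before listening
            [[ -S $rpc_addr ]] && return 0         # RPC socket is up
            sleep 0.5
        done
        return 1
    }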
00:30:18.268 [2024-07-24 19:07:03.038714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.038736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.038970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.038992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.039116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.039135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.039416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.039442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.039732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.039753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.040025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.040045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.040251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.040273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.040511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.268 [2024-07-24 19:07:03.040532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.268 qpair failed and we were unable to recover it. 00:30:18.268 [2024-07-24 19:07:03.040776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.269 [2024-07-24 19:07:03.040796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.269 qpair failed and we were unable to recover it. 00:30:18.269 [2024-07-24 19:07:03.041074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.269 [2024-07-24 19:07:03.041095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.269 qpair failed and we were unable to recover it. 
00:30:18.273 [2024-07-24 19:07:03.086411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.086433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 [2024-07-24 19:07:03.086427] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization...
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.086489] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:18.274 [2024-07-24 19:07:03.088096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.088117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.088355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.088376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.088516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.088537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.088749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.088771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.089040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.089061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.089296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.089317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.089530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.089551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.089888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.089911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.090204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.090225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.090567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.090587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.090800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.090821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.091058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.091077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.091356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.091376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.091611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.091632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.091844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.091865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.092022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.092044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.092342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.092363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.092492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.092512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.092734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.092754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.092948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.092970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.093190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.093211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.093444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.093466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.093668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.093690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.093888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.093909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.094139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.094160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.094352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.094372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.094553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.094577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.094792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.094813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.274 [2024-07-24 19:07:03.095009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.274 [2024-07-24 19:07:03.095030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.274 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.095296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.095316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.095535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.095555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.095717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.095738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.096001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.096022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.096230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.096251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.096550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.096569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.096825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.096847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.097044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.097064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.097267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.097288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.097449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.097470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.097682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.097703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.097857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.097877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.098085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.098105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.098223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.098244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.098461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.098481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.098624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.098646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.098912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.098933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.099087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.099108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.099300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.099320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.099626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.099647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.099913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.099933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.100204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.100225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.100436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.100457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.100595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.100622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.100934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.100955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.101160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.101180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.101398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.101419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.101552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.101573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.101810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.101831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.102025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.102045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.102339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.102359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.102600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.102628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.102785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.102805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.103103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.103123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.103292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.103312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.103553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.275 [2024-07-24 19:07:03.103573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.275 qpair failed and we were unable to recover it.
00:30:18.275 [2024-07-24 19:07:03.103792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.103812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.104108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.104132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.104335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.104355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.104579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.104599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.104770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.104792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.105001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.105021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.105313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.105334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.105632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.105653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.105978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.105999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.106221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.106241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.106455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.106476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.106746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.106767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.106959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.106980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.107183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.107203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.107477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.107498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.107796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.107817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.107969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.107990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.108255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.108275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.108565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.108585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.108882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.108903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.109094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.109114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.109259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.109279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.109474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.109495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.109690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.109711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.109977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.109997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.110143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.110163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.110302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.110322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.110524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.110544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.110681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.110701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.110844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.110865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.276 [2024-07-24 19:07:03.111040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.276 [2024-07-24 19:07:03.111061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.276 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.111354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.111375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.111567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.111587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.111794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.111815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.112018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.112038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.112334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.112353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.112548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.112569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.112876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.112898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.113105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.113124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.113420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.113439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.113593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.113622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.113860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 EAL: No free 2048 kB hugepages reported on node 1
00:30:18.277 [2024-07-24 19:07:03.113880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.114095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.114115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.114282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.114302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.114461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.114481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.114576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.114595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.114739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.114759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.115053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.115074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.115268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.115287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.115502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.115522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.115658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.115679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.115887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.115907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.116195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.116215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.116418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.116437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.116646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.116666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.116936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.116955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.117178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.117198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.117343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.117363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.117509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.117529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.117736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.117756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.117953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.117972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.118106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.118126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.118389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.118409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.118542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.118562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.118854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.277 [2024-07-24 19:07:03.118874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.277 qpair failed and we were unable to recover it.
00:30:18.277 [2024-07-24 19:07:03.119084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.119103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.119314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.119334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.119539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.119559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.119779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.119800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.120078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.120098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.120301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.120321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.120511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.120530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.120742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.120762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.121080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.121100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.121307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.121327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.121527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.121546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.121751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.121771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.122056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.122075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.122289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.122309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.122460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.122480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.122717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.122738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.122883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.122907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.123119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.123138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.123276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.123296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.123573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.123593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.123896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.123916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.124065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.124084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.124332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.124352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.124491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.124511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.124853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.124874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.125086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.125106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.125247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.125267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.125533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.125553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.125826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.125847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.126053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.126073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.126366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.126386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.126677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.126698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.126904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.126924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.127126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.127146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.127456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.127476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.278 [2024-07-24 19:07:03.127786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.278 [2024-07-24 19:07:03.127807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.278 qpair failed and we were unable to recover it.
00:30:18.279 [2024-07-24 19:07:03.128069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.279 [2024-07-24 19:07:03.128089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.279 qpair failed and we were unable to recover it.
00:30:18.279 [2024-07-24 19:07:03.128415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.279 [2024-07-24 19:07:03.128435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.279 qpair failed and we were unable to recover it.
00:30:18.279 [2024-07-24 19:07:03.128647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.279 [2024-07-24 19:07:03.128668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.279 qpair failed and we were unable to recover it.
00:30:18.279 [2024-07-24 19:07:03.128954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.279 [2024-07-24 19:07:03.128974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.279 qpair failed and we were unable to recover it.
00:30:18.279 [2024-07-24 19:07:03.129129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.129148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.129357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.129377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.129614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.129635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.129836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.129856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.130116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.130136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.130345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.130365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.130625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.130646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.130858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.130878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.131115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.131135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.131422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.131442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 
00:30:18.279 [2024-07-24 19:07:03.131674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.131694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.131957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.131977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.132273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.132293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.132520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.132540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.132731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.132752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.133034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.133055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.133261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.133284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.133478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.133498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.133788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.133809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.134020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.134040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 
00:30:18.279 [2024-07-24 19:07:03.134294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.134315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.134610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.134631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.134892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.134913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.135206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.135226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.135516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.135536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.135683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.135703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.135995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.136014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.136309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.136329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.136551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.136571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.136848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.136869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 
00:30:18.279 [2024-07-24 19:07:03.137073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.137093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.137380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.137400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.137684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.279 [2024-07-24 19:07:03.137705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.279 qpair failed and we were unable to recover it. 00:30:18.279 [2024-07-24 19:07:03.138037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.138057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.138319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.138339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.138618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.138638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.138926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.138946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.139230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.139250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.139504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.139524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.139811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.139831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 
00:30:18.280 [2024-07-24 19:07:03.140026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.140046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.140245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.140265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.140479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.140499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.140689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.140710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.141051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.141071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.141275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.141295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.141616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.141637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.141946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.141965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.142200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.142221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.142479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.142498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 
00:30:18.280 [2024-07-24 19:07:03.142762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.142782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.142914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.142933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.143131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.143150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.143434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.143453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.143711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.143732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.143940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.143960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.144185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.144209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.144437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.144456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.144755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.144775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.145038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.145058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 
00:30:18.280 [2024-07-24 19:07:03.145330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.145350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.145637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.145657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.145914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.145934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.146213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.146233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.146494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.146514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.146723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.146743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.147034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.147054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.147188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.147208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.147349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.280 [2024-07-24 19:07:03.147369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.280 qpair failed and we were unable to recover it. 00:30:18.280 [2024-07-24 19:07:03.147660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.147680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 
00:30:18.281 [2024-07-24 19:07:03.147999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.148019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.148209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.148229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.148511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.148531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.148791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.148812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.149107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.149128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.149362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.149381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.149640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.149661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.149955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.149976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.150246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.150266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.150471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.150491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 
00:30:18.281 [2024-07-24 19:07:03.150785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.150806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.151012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.151032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.151320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.151340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.151630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.151651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.151972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.151991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.152224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.152244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.152481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.152501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.152726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.152746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.153036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.153056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.153269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.153289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 
00:30:18.281 [2024-07-24 19:07:03.153476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.153495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.153752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.153773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.153991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.154010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.154297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.154317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.154584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.154612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.154867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.154887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.155145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.155168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.155460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.155479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.155766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.155786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 00:30:18.281 [2024-07-24 19:07:03.156015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.281 [2024-07-24 19:07:03.156034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.281 qpair failed and we were unable to recover it. 
00:30:18.281 [2024-07-24 19:07:03.156321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.156341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.156599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.156624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.156833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.156853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.157111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.157131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.157438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.157458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.157770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.157790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.158047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.158067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.158256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.158275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.158474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.158493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.158778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.158798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 
00:30:18.282 [2024-07-24 19:07:03.159076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.159096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.159330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.159350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.159544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.159563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.159750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.159770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.160057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.160077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.160280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.160301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.160570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.160590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.160879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.160899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.161177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.161196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.161482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.161502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 
00:30:18.282 [2024-07-24 19:07:03.161701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.161721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.161998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.162017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.162221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.162240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.162451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.162471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.162696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.162717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.163017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.163037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.163349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.163368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.163502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.163522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.163806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.163826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.164161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.164180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 
00:30:18.282 [2024-07-24 19:07:03.164380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.164399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.164613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.164633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.164942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.164962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.165221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.165240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.165375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.165394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.165678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.282 [2024-07-24 19:07:03.165698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.282 qpair failed and we were unable to recover it. 00:30:18.282 [2024-07-24 19:07:03.165983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.166005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.166282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.166302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.166558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.166577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.166870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.166889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 
00:30:18.283 [2024-07-24 19:07:03.167178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.167197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.167418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.167438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.167757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.167777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.168037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.168057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.168254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.168273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.168531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.168550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.168820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.168840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.169125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.169144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.169343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.169362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 00:30:18.283 [2024-07-24 19:07:03.169642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.283 [2024-07-24 19:07:03.169663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.283 qpair failed and we were unable to recover it. 
00:30:18.283 [2024-07-24 19:07:03.169926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.283 [2024-07-24 19:07:03.169946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.283 qpair failed and we were unable to recover it.
00:30:18.283 [... the same connect() failed / sock connection error / qpair failed triplet repeats for each reconnect attempt from 19:07:03.170134 through 19:07:03.194843; duplicate entries elided ...]
00:30:18.286 [2024-07-24 19:07:03.195008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:18.286 [... the same failure triplet resumes at 19:07:03.195152 and repeats through the final attempt ...]
00:30:18.289 [2024-07-24 19:07:03.226175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.289 [2024-07-24 19:07:03.226194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.289 qpair failed and we were unable to recover it.
00:30:18.289 [2024-07-24 19:07:03.226402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.226422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.226675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.226695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.227005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.227024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.227305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.227325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.227582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.227612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.227899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.227919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.228109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.228128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.228316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.228335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.228617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.228637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.228921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.228941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 
00:30:18.289 [2024-07-24 19:07:03.229272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.229292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.229579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.229598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.229889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.229909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.230174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.230193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.230384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.230403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.230713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.230733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.289 qpair failed and we were unable to recover it. 00:30:18.289 [2024-07-24 19:07:03.231021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.289 [2024-07-24 19:07:03.231040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.231321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.231344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.231671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.231692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.231981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.232000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 
00:30:18.290 [2024-07-24 19:07:03.232285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.232305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.232528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.232548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.232859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.232879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.233084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.233103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.233390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.233409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.233598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.233622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.233904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.233923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.234156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.234176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.234460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.234479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.234811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.234831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 
00:30:18.290 [2024-07-24 19:07:03.235037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.235056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.290 [2024-07-24 19:07:03.235343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.290 [2024-07-24 19:07:03.235363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.290 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.235587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.235620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.235909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.235929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.236242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.236262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.236517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.236537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.236723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.236743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.237053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.237073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.237341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.237361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.237559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.237578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 
00:30:18.571 [2024-07-24 19:07:03.237876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.237896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.238175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.238194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.238502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.238521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.238742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.238762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.239136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.239212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.239550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.239585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.239845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.239877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.240161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.240192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.240527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.240548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.240807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.240826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 
00:30:18.571 [2024-07-24 19:07:03.241050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.241069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.241325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.241345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.241632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.241652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.241836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.241856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.242112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.242133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.242427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.242448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.242706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.242725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.242927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.242950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.243146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.243166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.243421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.243441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 
00:30:18.571 [2024-07-24 19:07:03.243728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.243748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.243880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.243899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.244133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.571 [2024-07-24 19:07:03.244152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.571 qpair failed and we were unable to recover it. 00:30:18.571 [2024-07-24 19:07:03.244436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.244455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.244783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.244804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.245013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.245033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.245269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.245288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.245545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.245565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.245838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.245858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.246061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.246080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 
00:30:18.572 [2024-07-24 19:07:03.246342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.246362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.246651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.246671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.246885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.246905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.247095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.247114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.247401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.247421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.247621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.247641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.247840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.247860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.248130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.248149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.248303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.248322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.248551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.248570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 
00:30:18.572 [2024-07-24 19:07:03.248870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.248890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.249119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.249139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.249395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.249414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.249632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.249652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.249859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.249879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.250174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.250193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.250455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.250475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.250746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.250766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.251051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.251071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.251354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.251374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 
00:30:18.572 [2024-07-24 19:07:03.251665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.251685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.251895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.251915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.252139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.252159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.252361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.252380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.252655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.252675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.252956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.252976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.253258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.253278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.253493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.253515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.253740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.253760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 00:30:18.572 [2024-07-24 19:07:03.254017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.572 [2024-07-24 19:07:03.254036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.572 qpair failed and we were unable to recover it. 
00:30:18.572 [2024-07-24 19:07:03.254251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.254271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.254553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.254572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.254780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.254800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.255065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.255084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.255308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.255328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.255548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.255567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.255827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.255848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.256125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.256145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.256413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.256432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.256715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.256735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 
00:30:18.573 [2024-07-24 19:07:03.257017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.257036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.257372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.257392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.257610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.257630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.257887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.257907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.258130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.258150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.258410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.258429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.258708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.258729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.258986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.259005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.259193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.259212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.259521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.259540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 
00:30:18.573 [2024-07-24 19:07:03.259772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.259792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.260047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.260066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.260349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.260368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.260497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.260516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.260853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.260873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.261097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.261116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.261402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.261422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.261656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.261676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.261876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.261895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 00:30:18.573 [2024-07-24 19:07:03.262168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.573 [2024-07-24 19:07:03.262187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.573 qpair failed and we were unable to recover it. 
00:30:18.573 [2024-07-24 19:07:03.262472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.573 [2024-07-24 19:07:03.262491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.573 qpair failed and we were unable to recover it.
00:30:18.573 [2024-07-24 19:07:03.262775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.573 [2024-07-24 19:07:03.262795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.573 qpair failed and we were unable to recover it.
[... the identical three-line retry sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously through 2024-07-24 19:07:03.319651 ...]
00:30:18.579 [2024-07-24 19:07:03.319934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.319954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.320213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.320232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.320508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.320527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.320716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.320736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.320940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.320960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.321243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.321262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.321490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.321509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.321819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.321839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.322031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.322050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.322334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.322353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 
00:30:18.579 [2024-07-24 19:07:03.322638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.322658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.322792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.322812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.323014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.323033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.323230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.323250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.323453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.323473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.323757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.323777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.324052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.324072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.324305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.324324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.324580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.324600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.324809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.324829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 
00:30:18.579 [2024-07-24 19:07:03.325142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.325161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.325393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.325412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.325695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.325715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.325848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.325866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.326056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.326079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.326414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.579 [2024-07-24 19:07:03.326434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.579 qpair failed and we were unable to recover it. 00:30:18.579 [2024-07-24 19:07:03.326692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.326712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.326928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.326948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.327236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.327255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.327489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.327509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 
00:30:18.580 [2024-07-24 19:07:03.327717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.327737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.327946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.327966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.328296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.328316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.328573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.328592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.328881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.328901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.329138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.329157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.329471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.329491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.329800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.329821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.330056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.330076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.330284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.330303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 
00:30:18.580 [2024-07-24 19:07:03.330527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.330546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.330864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.330885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.331076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.331096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.331322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.331341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.331552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.331572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.331770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.331790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.331989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.332008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.332290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.332311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.332514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.332533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.332844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.332864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 
00:30:18.580 [2024-07-24 19:07:03.333067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.333087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.333350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.333369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.333628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.333648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.333886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.333906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.334135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.334155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.334443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.334463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.334678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.334699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.334899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.334919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.335115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.335134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.335428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.335448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 
00:30:18.580 [2024-07-24 19:07:03.335706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.335749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.580 qpair failed and we were unable to recover it. 00:30:18.580 [2024-07-24 19:07:03.336049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.580 [2024-07-24 19:07:03.336070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.336372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.336391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.336610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.336630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.336918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.336942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.337225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.337245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.337574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.337594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.337830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.337851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.338123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.338143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.338430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.338450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 
00:30:18.581 [2024-07-24 19:07:03.338730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.338751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.339034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.339053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.339335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.339355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.339616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.339636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.339899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.339918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.340188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.340208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.340447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.340466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.340724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.340744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.340963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.340982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.341273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.341292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 
00:30:18.581 [2024-07-24 19:07:03.341582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.341608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.341836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.341855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.342060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.342080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.342341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.342361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.342650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.342671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.342957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.342977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.343303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.343323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.343522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.343542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.343829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.343849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-24 19:07:03.344056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.581 [2024-07-24 19:07:03.344076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.581 qpair failed and we were unable to recover it. 
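errno = 111 here is ECONNREFUSED on Linux: at this point in the run nothing is accepting connections on 10.0.0.2:4420 (4420 is the default NVMe/TCP port), so every qpair connect attempt is refused and the initiator reports the qpair as unrecoverable. The same three-record message repeats for each attempt while the target side is still initializing (the reactor startup notices appear further down). As a minimal sketch for checking this from the test node (assuming iproute2's ss and bash's /dev/tcp are available on the host; the address and port come straight from the log):

# Is anything listening on the NVMe/TCP port yet?
ss -ltn 'sport = :4420'
# Probe the target directly; "Connection refused" is the same errno 111
timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo connected || echo refused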
00:30:18.581 [2024-07-24 19:07:03.346523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:18.581 [2024-07-24 19:07:03.346586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:18.581 [2024-07-24 19:07:03.346617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:18.581 [2024-07-24 19:07:03.346637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:18.581 [2024-07-24 19:07:03.346653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
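The NOTICE records above are the target app's own instructions for pulling its trace data; following them verbatim (instance id 0 and the shm path are exactly as printed, the copy destination below is an arbitrary choice for illustration):

# Capture a snapshot of the nvmf tracepoints from the running instance
spdk_trace -s nvmf -i 0
# Or preserve the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0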
00:30:18.581 [2024-07-24 19:07:03.346814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:18.581 [2024-07-24 19:07:03.346929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:18.581 [2024-07-24 19:07:03.347041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:18.582 [2024-07-24 19:07:03.347046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:18.583 [2024-07-24 19:07:03.364920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.583 [2024-07-24 19:07:03.364941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.583 qpair failed and we were unable to recover it.
00:30:18.583 [2024-07-24 19:07:03.365174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.365194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.365499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.365520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.365866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.365888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.366205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.366227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.366429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.366450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.366663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.366685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.366913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.366934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.367237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.367258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.367401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.367423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.367659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.367681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 
00:30:18.583 [2024-07-24 19:07:03.367970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.367991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.368217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.368237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.368445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.368466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.368759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.583 [2024-07-24 19:07:03.368781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-24 19:07:03.369050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.369071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.369361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.369383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.369646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.369668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.369990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.370011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.370204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.370224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.370426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.370447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 19:07:03.370654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.370676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.370969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.370991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.371309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.371330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.371629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.371651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.371864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.371886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.372078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.372099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.372394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.372415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.372634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.372660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.372954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.372974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.373304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.373325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 19:07:03.373592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.373621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.373916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.373938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.374279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.374301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.374527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.374548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.374830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.374852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.375176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.375197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.375424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.375444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.375733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.375755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.376026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.376046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.376312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.376333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 
00:30:18.584 [2024-07-24 19:07:03.376593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.376639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.376784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.376805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.377121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.377142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.377405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.377426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.377726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.377748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.378067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.378089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.378285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.378306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.378637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.378660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.378856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.584 [2024-07-24 19:07:03.378878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.584 qpair failed and we were unable to recover it. 00:30:18.584 [2024-07-24 19:07:03.379083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.379103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 19:07:03.379368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.379389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.379690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.379712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.380034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.380055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.380349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.380369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.380692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.380714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.380979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.381001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.381296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.381317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.381639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.381663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.381868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.381889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.382152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.382172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 19:07:03.382463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.382484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.382676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.382698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.382913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.382933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.383225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.383247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.383563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.383583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.383891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.383912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.384110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.384130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.384403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.384425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.384633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.384655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.384894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.384914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 19:07:03.385107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.385128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.385392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.385413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.385709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.385730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.385944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.385964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.386278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.386299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.386618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.386641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.386941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.386962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.387172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.387191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.387384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.387405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.387694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.387716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 
00:30:18.585 [2024-07-24 19:07:03.387999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.388036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.388243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.388263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.388564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.388585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.388846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.388866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.389152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.389173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.389466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.389486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.389762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.389782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.389990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.390011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.390303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.390323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.585 qpair failed and we were unable to recover it. 00:30:18.585 [2024-07-24 19:07:03.390616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.585 [2024-07-24 19:07:03.390637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 
00:30:18.586 [2024-07-24 19:07:03.390956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.390976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.391211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.391230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.391422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.391442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.391648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.391669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.391967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.391991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.392113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.392133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.392348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.392372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.392667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.392690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.392953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.392973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.393248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.393268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 
00:30:18.586 [2024-07-24 19:07:03.393467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.393487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.393754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.393775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.394064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.394084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.394298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.394318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.394453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.394473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.394679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.394699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.394964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.394984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.395244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.395264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.395569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.395590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.395816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.395837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 
00:30:18.586 [2024-07-24 19:07:03.396103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.396125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.396340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.396361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.396593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.396625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.396855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.396875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.397167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.397190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.397387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.397407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.397687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.397708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.397963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.397984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.398244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.398265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.398565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.398586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 
00:30:18.586 [2024-07-24 19:07:03.398828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.398850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.399059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.399079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.399365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.399386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.399665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.399686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.399994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.400015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.400276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.400296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.400487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.400507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.400737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.400759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.400990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.401010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 00:30:18.586 [2024-07-24 19:07:03.401305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.586 [2024-07-24 19:07:03.401325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.586 qpair failed and we were unable to recover it. 
00:30:18.586 [2024-07-24 19:07:03.401524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.401546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.401822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.401844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.402040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.402062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.402278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.402299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.402509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.402537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.402824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.402844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.403039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.403059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.403251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.403271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.403534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.403553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.403852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.403873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 
[... six more identical attempts against tqpair=0x7fe5e8000b90 (19:07:03.404192 through 19:07:03.405564) fail the same way; trimmed ...]
00:30:18.587 [2024-07-24 19:07:03.405996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.587 [2024-07-24 19:07:03.406083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420
00:30:18.587 qpair failed and we were unable to recover it.
00:30:18.587 [2024-07-24 19:07:03.406397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.587 [2024-07-24 19:07:03.406468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:18.587 qpair failed and we were unable to recover it.
[... two further attempts against tqpair=0x54fda0 (19:07:03.406813, 19:07:03.407155) fail the same way; attempts then resume against tqpair=0x7fe5e8000b90 and keep failing identically from 19:07:03.407443 through 19:07:03.412400; trimmed ...]
00:30:18.587 [2024-07-24 19:07:03.412694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.587 [2024-07-24 19:07:03.412714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.587 qpair failed and we were unable to recover it.
00:30:18.587 [2024-07-24 19:07:03.413001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.587 [2024-07-24 19:07:03.413020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.587 qpair failed and we were unable to recover it. 00:30:18.587 [2024-07-24 19:07:03.413235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.413254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.413541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.413561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.413832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.413852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.414058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.414081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.414363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.414383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.414588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.414614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.414885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.414904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.415225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.415245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.415507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.415527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 
00:30:18.588 [2024-07-24 19:07:03.415815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.415836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.416071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.416091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.416283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.416304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.416589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.416618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.416923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.416942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.417202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.417221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.417490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.417510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.417742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.417762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.418030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.418050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.418190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.418211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 
00:30:18.588 [2024-07-24 19:07:03.418501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.418520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.418736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.418757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.419042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.419062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.419265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.419286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.419560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.419580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.419867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.419887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.420173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.420192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.420469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.420488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.420721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.420740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.420943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.420962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 
00:30:18.588 [2024-07-24 19:07:03.421165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.421185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.421474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.421494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.421684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.421704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.421909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.421928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.422186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.422206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.422496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.422515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.588 qpair failed and we were unable to recover it. 00:30:18.588 [2024-07-24 19:07:03.422730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.588 [2024-07-24 19:07:03.422749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.422952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.422971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.423281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.423300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.423487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.423506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 19:07:03.423794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.423815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.424071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.424091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.424374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.424393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.424632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.424653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.424803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.424827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.425086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.425106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.425373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.425392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.425527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.425547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.425805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.425827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.426086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.426107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 19:07:03.426344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.426364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.426634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.426655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.426931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.426951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.427234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.427255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.427545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.427565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.427772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.427793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.428068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.428089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.428385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.428404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.428719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.428740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.429011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.429031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 19:07:03.429312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.429332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.429619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.429640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.429939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.429959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.430222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.430242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.430442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.430462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.430718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.430739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.430998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.431018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.431144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.431164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.431371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.431393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.431523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.431543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 
00:30:18.589 [2024-07-24 19:07:03.431730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.431751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.432042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.432062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.432346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.432366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.432667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.432688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.432970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.432991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.433207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.433227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.433484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.589 [2024-07-24 19:07:03.433506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.589 qpair failed and we were unable to recover it. 00:30:18.589 [2024-07-24 19:07:03.433712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.433733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.433931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.433951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.434209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.434229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 19:07:03.434368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.434390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.434676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.434699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.434984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.435005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.435275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.435295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.435502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.435526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.435782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.435802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.436083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.436102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.436358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.436377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.436610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.436631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.436912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.436931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 19:07:03.437229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.437248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.437545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.437565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.437757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.437776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.437966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.437985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.438268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.438288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.438574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.438594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.438835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.438854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.439139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.439159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.439446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.439466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.439668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.439689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 19:07:03.439972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.439999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.440155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.440174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.440392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.440412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.440636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.440655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.440977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.440996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.441252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.441272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.441486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.441505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.441764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.441784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.442002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.442022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.442301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.442321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 
00:30:18.590 [2024-07-24 19:07:03.442600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.442627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.442937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.442957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.443268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.443288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.443514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.443533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.443758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.443778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.444018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.444037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.444317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.444336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.590 [2024-07-24 19:07:03.444614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.590 [2024-07-24 19:07:03.444634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.590 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.444914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.444934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.445190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.445209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 
00:30:18.591 [2024-07-24 19:07:03.445431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.445450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.445652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.445671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.445928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.445948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.446066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.446086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.446368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.446391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.446649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.446669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.446889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.446909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.447116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.447136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.447406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.447426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.447699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.447719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 
00:30:18.591 [2024-07-24 19:07:03.447980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.448000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.448161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.448181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.448461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.448481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.448794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.448815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.449132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.449151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.449461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.449480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.449676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.449696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.449975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.449995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.450273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.450294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 00:30:18.591 [2024-07-24 19:07:03.450577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.591 [2024-07-24 19:07:03.450597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.591 qpair failed and we were unable to recover it. 
00:30:18.596 [2024-07-24 19:07:03.503716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.503737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.504005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.504025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.504344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.504364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.504671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.504694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.504949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.504969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.505303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.505323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.505556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.505574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.505853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.505873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.506022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.506041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.506273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.506294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 
00:30:18.596 [2024-07-24 19:07:03.506490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.506509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.506773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.506797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.506935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.506954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.507186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.507205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.507492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.507512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.507711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.507732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.596 [2024-07-24 19:07:03.507992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.596 [2024-07-24 19:07:03.508012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.596 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.508148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.508167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.508461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.508481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.508685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.508705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 19:07:03.508915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.508934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.509164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.509183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.509500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.509520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.509660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.509679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.509827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.509847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.510107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.510126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.510461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.510480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.510671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.510691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.510900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.510919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.511130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.511150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 19:07:03.511370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.511389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.511649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.511669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.511857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.511876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.512086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.512105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.512368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.512387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.512573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.512592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.512802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.512821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.513022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.513041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.513303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.513323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.513547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.513566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 19:07:03.513798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.513818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.514004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.514024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.514182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.514201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.514426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.514447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.514763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.514784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.515022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.515042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.515175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.515196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.515502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.515521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.515725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.515745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.515939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.515959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 
00:30:18.597 [2024-07-24 19:07:03.516209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.516228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.516520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.597 [2024-07-24 19:07:03.516543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.597 qpair failed and we were unable to recover it. 00:30:18.597 [2024-07-24 19:07:03.516832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.516852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.517057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.517076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.517283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.517302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.517503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.517523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.517714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.517734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.518016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.518036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.518236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.518255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.518531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.518551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 19:07:03.518686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.518707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.518933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.518952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.519208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.519228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.519421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.519440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.519750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.519769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.519912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.519931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.520161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.520180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.520383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.520402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.520734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.520754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.521056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.521075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 19:07:03.521381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.521400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.521634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.521654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.521914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.521933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.522063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.522082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.522339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.522359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.522555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.522574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.522869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.522889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.523158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.523177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.523443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.523463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.523665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.523688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 19:07:03.523895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.523915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.524173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.524193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.524424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.524443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.524652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.524672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.525007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.525028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.525217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.525236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.525492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.525511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.525698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.525718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.526027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.526047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.526304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.526323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 
00:30:18.598 [2024-07-24 19:07:03.526638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.526658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.526970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.598 [2024-07-24 19:07:03.526993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.598 qpair failed and we were unable to recover it. 00:30:18.598 [2024-07-24 19:07:03.527249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.527269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.527570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.527590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.527827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.527847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.528049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.528069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.528346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.528365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.528628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.528649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.528852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.528872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.529013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.529032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 19:07:03.529176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.529195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.529486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.529505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.529797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.529818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.530028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.530047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.530245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.530264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.530549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.530569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.530737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.530757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.531039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.531058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.531326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.531345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.531485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.531504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 19:07:03.531794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.531814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.532092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.532112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.532314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.532334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.532623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.532644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.532913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.532932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.533193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.533212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.533358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.533378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.533669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.533689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.533889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.533909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.534180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.534199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 19:07:03.534397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.534417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.534569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.534588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.534811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.534831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.534982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.535002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.535259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.535278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.535484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.535503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.535791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.535812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.536021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.536041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.536309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.536328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.536623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.536642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 
00:30:18.599 [2024-07-24 19:07:03.536806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.536825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.537026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.537049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.599 [2024-07-24 19:07:03.537244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.599 [2024-07-24 19:07:03.537263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.599 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.537546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.537565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.537729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.537749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.537977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.537997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.538211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.538230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.538437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.538455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.538817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.538837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 00:30:18.600 [2024-07-24 19:07:03.539041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.600 [2024-07-24 19:07:03.539060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.600 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 19:07:03.585767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.585788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.585992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.586015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.586262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.586282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.586431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.586450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.586589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.586628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.586895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.586915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.587144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.587164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.587467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.587487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.587730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.587751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.587983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.588003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 19:07:03.588156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.588176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.588491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.588511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.588771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.588791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.588977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.588997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.589195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.589216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.589439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.589459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.589791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.589811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.590107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.590127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.590367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.590386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.590576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.590595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 
00:30:18.885 [2024-07-24 19:07:03.590885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.590905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.591161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.591181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.591318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.591337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.591619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.591639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.591769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.591789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.592097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.592116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.885 qpair failed and we were unable to recover it. 00:30:18.885 [2024-07-24 19:07:03.592380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.885 [2024-07-24 19:07:03.592399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 19:07:03.592618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 19:07:03.592638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 00:30:18.886 [2024-07-24 19:07:03.592691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x55de80 (9): Bad file descriptor 00:30:18.886 [2024-07-24 19:07:03.593104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.886 [2024-07-24 19:07:03.593175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5d8000b90 with addr=10.0.0.2, port=4420 00:30:18.886 qpair failed and we were unable to recover it. 
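Two distinct failures are interleaved above. errno 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 is not accepting TCP connections, so every posix_sock_create() retry is refused. The one-off errno 9 (EBADF) from nvme_tcp_qpair_process_completions() indicates the flush ran against a socket descriptor that was no longer valid, suggesting the qpair's socket had already been torn down. A minimal sketch of that second mode, using a plain pipe descriptor rather than an SPDK qpair (illustration only, not SPDK code):

/* Hypothetical illustration: I/O on an already-closed descriptor
 * fails with errno 9 (EBADF), the same error the flush above hits. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                  /* descriptor torn down first */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints: flush failed, errno = 9 (Bad file descriptor) */
        printf("flush failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}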
00:30:18.886 [2024-07-24 19:07:03.593390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.886 [2024-07-24 19:07:03.593457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:18.886 qpair failed and we were unable to recover it.
00:30:18.886 [2024-07-24 19:07:03.593811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.886 [2024-07-24 19:07:03.593882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420
00:30:18.886 qpair failed and we were unable to recover it.
00:30:18.886 [2024-07-24 19:07:03.594194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.886 [2024-07-24 19:07:03.594216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.886 qpair failed and we were unable to recover it.
[... the tqpair=0x7fe5e8000b90 record then repeats unchanged for every retry from 2024-07-24 19:07:03.594361 through 19:07:03.631430; only the timestamps advance ...]
00:30:18.890 [2024-07-24 19:07:03.631714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.631735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.631966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.631985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.632231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.632250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.632553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.632573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.632839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.632859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.633139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.633158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.633498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.633517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.633670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.633691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.633897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.633915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.634127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.634146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 
00:30:18.890 [2024-07-24 19:07:03.634436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.634456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.634712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.634733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.634939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.634958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.635159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.635179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.635431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.635450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.635735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.635755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.635902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.635921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.636130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.636149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.636374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.636394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.636678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.636697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 
00:30:18.890 [2024-07-24 19:07:03.636905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.636925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.637073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.637092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.637365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.637384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.637615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.637635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.637796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.637815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.638017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.638036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.638232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.638251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.638562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.638582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.890 qpair failed and we were unable to recover it. 00:30:18.890 [2024-07-24 19:07:03.638778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.890 [2024-07-24 19:07:03.638797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.639007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.639027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 
00:30:18.891 [2024-07-24 19:07:03.639234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.639252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.639479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.639499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.639800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.639820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.639967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.639988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.640221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.640241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.640466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.640490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.640701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.640723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.641003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.641022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.641305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.641325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.641679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.641698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 
00:30:18.891 [2024-07-24 19:07:03.641957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.641976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.642124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.642145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.642286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.642305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.642567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.642587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.642909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.642932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.643132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.643151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.643457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.643477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.643746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.643766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.644008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.644029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.644244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.644264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 
00:30:18.891 [2024-07-24 19:07:03.644470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.644490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.644692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.644712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.644900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.644919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.645068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.645088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.645356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.645376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.645631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.645651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.645915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.645935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.646144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.646164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.646487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.646507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.891 [2024-07-24 19:07:03.646712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.646733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 
00:30:18.891 [2024-07-24 19:07:03.646883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.891 [2024-07-24 19:07:03.646903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.891 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.647104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.647125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.647446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.647466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.647669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.647690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.647811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.647832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.647961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.647980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.648185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.648205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.648440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.648460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.648692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.648712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.648844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.648863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 
00:30:18.892 [2024-07-24 19:07:03.648999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.649018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.649298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.649318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.649560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.649580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.649793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.649814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.650049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.650069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.650267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.650291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.650573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.650593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.650899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.650920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.651121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.651141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.651425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.651444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 
00:30:18.892 [2024-07-24 19:07:03.651718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.651739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.651898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.651917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.652129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.652148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.652403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.652422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.652744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.652764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.652958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.652977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.653290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.653310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.653566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.653585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.653750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.653769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.654126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.654146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 
00:30:18.892 [2024-07-24 19:07:03.654418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.654438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.654661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.654682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.654892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.654911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.655059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.655078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.655327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.655348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.655643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.655663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.655793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.655813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.656007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.892 [2024-07-24 19:07:03.656027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.892 qpair failed and we were unable to recover it. 00:30:18.892 [2024-07-24 19:07:03.656228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.656247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.656514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.656533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 
00:30:18.893 [2024-07-24 19:07:03.656742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.656763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.656964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.656983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.657120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.657139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.657456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.657475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.657670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.657691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.657895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.657915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.658125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.658144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.658454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.658473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.658613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.658633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.658915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.658935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 
00:30:18.893 [2024-07-24 19:07:03.659090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.659110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.659249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.659268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.659495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.659515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.659647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.659668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.659881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.659900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.660178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.660201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.660532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.660553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.660757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.660776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.660992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.661012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.661168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.661187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 
00:30:18.893 [2024-07-24 19:07:03.661415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.661435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.661690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.661710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.661925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.661944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.662152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.662172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.662396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.662415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.662637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.662657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.662857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.662876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.663062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.663081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.663235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.663255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 00:30:18.893 [2024-07-24 19:07:03.663513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.663533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it. 
00:30:18.893 [2024-07-24 19:07:03.663738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.893 [2024-07-24 19:07:03.663759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.893 qpair failed and we were unable to recover it.
[... the same error triplet repeats roughly 200 more times between 19:07:03.663 and 19:07:03.714 (wall clock 00:30:18.893 through 00:30:18.899), always for tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420; every connect() attempt failed with errno = 111 and no qpair could be recovered ...]
00:30:18.899 [2024-07-24 19:07:03.714446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.714465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.714665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.714685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.714893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.714916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.715152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.715172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.715315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.715333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.715516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.715534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.715745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.715765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.715968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.715988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.716262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.716281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.716464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.716484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 
00:30:18.899 [2024-07-24 19:07:03.716706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.716726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.716936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.716956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.717174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.717193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.717507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.717526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.717822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.717841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.718048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.899 [2024-07-24 19:07:03.718068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.899 qpair failed and we were unable to recover it. 00:30:18.899 [2024-07-24 19:07:03.718269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.718289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.718494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.718513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.718757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.718778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.718987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.719006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 
00:30:18.900 [2024-07-24 19:07:03.719190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.719209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.719541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.719560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.719697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.719717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.719909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.719928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.720195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.720215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.720531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.720550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.720761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.720781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.720931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.720950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.721152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.721171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.721398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.721418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 
00:30:18.900 [2024-07-24 19:07:03.721686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.721706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.721895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.721915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.722121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.722141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.722281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.722300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.722444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.722465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.722658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.722678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.722887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.722906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.723171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.723190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.723547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.723566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.723751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.723771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 
00:30:18.900 [2024-07-24 19:07:03.723970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.723989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.724246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.724266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.724471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.724494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.724703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.724723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.724884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.724903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.725108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.725127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.725262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.725282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.725534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.900 [2024-07-24 19:07:03.725553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.900 qpair failed and we were unable to recover it. 00:30:18.900 [2024-07-24 19:07:03.725852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.725872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.726085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.726105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 
00:30:18.901 [2024-07-24 19:07:03.726259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.726278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.726493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.726512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.726771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.726790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.727114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.727134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.727373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.727392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.727595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.727621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.727834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.727855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.728120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.728140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.728377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.728396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.728662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.728683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 
00:30:18.901 [2024-07-24 19:07:03.728802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.728821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.729038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.729057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.729314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.729333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.729534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.729553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.729752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.729772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.730001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.730022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.730166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.730185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.730413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.730432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.730630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.730651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.730856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.730876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 
00:30:18.901 [2024-07-24 19:07:03.731011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.731031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.731175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.731194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.731456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.731476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.731770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.731790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.731946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.731965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.732085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.732104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.732370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.732389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.732584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.732613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.732855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.732874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.733100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.733119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 
00:30:18.901 [2024-07-24 19:07:03.733339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.733359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.733654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.733675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.901 [2024-07-24 19:07:03.733878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.901 [2024-07-24 19:07:03.733901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.901 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.734185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.734204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.734461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.734481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.734630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.734650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.734857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.734877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.735091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.735110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.735326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.735346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.735570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.735589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 
00:30:18.902 [2024-07-24 19:07:03.735911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.735931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.736147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.736167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.736371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.736390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.736713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.736733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.736873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.736893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.737176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.737196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.737480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.737501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.737733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.737755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.737967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.737987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.738196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.738216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 
00:30:18.902 [2024-07-24 19:07:03.738515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.738535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.738722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.738742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.738946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.738965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.739113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.739132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.739413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.739433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.739707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.739728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.739955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.739975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.740174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.740194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.740395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.740415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.740623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.740643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 
00:30:18.902 [2024-07-24 19:07:03.740852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.740872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.741019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.741039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.741274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.902 [2024-07-24 19:07:03.741294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.902 qpair failed and we were unable to recover it. 00:30:18.902 [2024-07-24 19:07:03.741489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.741509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.741642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.741662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.741871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.741891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.742126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.742145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.742326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.742346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.742659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.742679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.742913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.742934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 
00:30:18.903 [2024-07-24 19:07:03.743065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.743084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.743338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.743359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.743565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.743589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.743788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.743808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.744058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.744078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.744221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.744240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.744549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.744569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.744881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.744901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.745054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.745074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 00:30:18.903 [2024-07-24 19:07:03.745214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.903 [2024-07-24 19:07:03.745233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.903 qpair failed and we were unable to recover it. 
00:30:18.903 [2024-07-24 19:07:03.745592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.903 [2024-07-24 19:07:03.745621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.903 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fe5e8000b90" / "qpair failed and we were unable to recover it." triplets repeat from 19:07:03.745774 through 19:07:03.758120 ...]
00:30:18.905 [2024-07-24 19:07:03.758377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.905 [2024-07-24 19:07:03.758431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420
00:30:18.905 qpair failed and we were unable to recover it.
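The errno = 111 repeated throughout this burst is ECONNREFUSED on Linux: the initiator's TCP SYN to 10.0.0.2:4420 is typically answered with an RST because nothing is listening on the NVMe/TCP port at that moment. A minimal POSIX sketch (a generic illustration, not SPDK's actual posix_sock_create) that reproduces the same errno against a closed port:

```c
/* Minimal sketch, assuming no listener on 10.0.0.2:4420 (values taken
 * from the log above). Generic POSIX sockets, not SPDK code. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target, errno is ECONNREFUSED
         * (value 111 on Linux), matching the log lines above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```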
00:30:18.905 [2024-07-24 19:07:03.758678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.905 [2024-07-24 19:07:03.758720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420
00:30:18.905 qpair failed and we were unable to recover it.
[... four more identical failure triplets for tqpair=0x7fe5e0000b90 from 19:07:03.758948 through 19:07:03.759733 ...]
00:30:18.906 [2024-07-24 19:07:03.759952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.906 [2024-07-24 19:07:03.759973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.906 qpair failed and we were unable to recover it.
[... four more identical failure triplets for tqpair=0x7fe5e8000b90 from 19:07:03.760162 through 19:07:03.760897 ...]
00:30:18.906 [2024-07-24 19:07:03.761035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.906 [2024-07-24 19:07:03.761055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.906 qpair failed and we were unable to recover it.
[... identical failure triplets for tqpair=0x7fe5e8000b90 repeat from 19:07:03.761262 through 19:07:03.789673 ...]
00:30:18.911 [2024-07-24 19:07:03.789832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.911 [2024-07-24 19:07:03.789851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:18.911 qpair failed and we were unable to recover it.
00:30:18.911 [2024-07-24 19:07:03.790000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.790018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.790208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.790226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.790430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.790449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.790699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.790717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.790866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.790884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.791086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.791104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.791358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.791375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.791640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.791659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.791880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.791899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.792098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.792120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 
00:30:18.911 [2024-07-24 19:07:03.792325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.792343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.792622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.792642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.792848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.792866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.793067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.793087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.793249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.793267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.793418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.793436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.793720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.793740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.793872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.793891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.794145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.794163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.794482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.794500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 
00:30:18.911 [2024-07-24 19:07:03.794770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.794789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.795046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.795063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.795345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.795363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.795588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.795627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.795828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.795847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.796001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.796019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.796175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.796193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.796432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.796450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.796574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.796592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.796881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.796900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 
00:30:18.911 [2024-07-24 19:07:03.797099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.797116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.797404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.797423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.797723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.797742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.797926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.797944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.911 [2024-07-24 19:07:03.798096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.911 [2024-07-24 19:07:03.798114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.911 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.798365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.798384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.798649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.798667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.798800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.798818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.799058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.799076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.799226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.799244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 
00:30:18.912 [2024-07-24 19:07:03.799530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.799549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.799761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.799780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.799988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.800007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.800206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.800225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.800359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.800377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.800527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.800546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.800848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.800866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.801101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.801119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.801409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.801427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.801620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.801646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 
00:30:18.912 [2024-07-24 19:07:03.801864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.801882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.802111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.802129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.802352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.802370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.802519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.802537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.802819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.802838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.803051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.803069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.803274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.803293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.803422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.803440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.803704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.803723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.803918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.803936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 
00:30:18.912 [2024-07-24 19:07:03.804139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.804157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.804460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.804478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.804795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.804815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.805039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.805057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.805242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.805261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.912 [2024-07-24 19:07:03.805556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.912 [2024-07-24 19:07:03.805575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.912 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.805826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.805845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.805984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.806002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.806162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.806181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.806387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.806405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 
00:30:18.913 [2024-07-24 19:07:03.806614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.806633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.806830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.806848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.806995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.807014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.807210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.807228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.807376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.807394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.807595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.807622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.807834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.807852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.808059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.808077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.808292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.808310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.808430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.808448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 
00:30:18.913 [2024-07-24 19:07:03.808577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.808595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.808875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.808893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.809081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.809099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.809315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.809333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.809465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.809483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.809684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.809702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.809852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.809871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.810019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.810037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.810265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.810284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.810481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.810503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 
00:30:18.913 [2024-07-24 19:07:03.812994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.813030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.813359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.813379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.813589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.813618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.813761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.813779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.813934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.813952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.814142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.814161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.814305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.814323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.814578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.814597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.814923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.814943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.815174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.815192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 
00:30:18.913 [2024-07-24 19:07:03.815373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.815392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.815573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.815591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.815794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.913 [2024-07-24 19:07:03.815814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.913 qpair failed and we were unable to recover it. 00:30:18.913 [2024-07-24 19:07:03.815948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.815967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.816113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.816132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.816275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.816293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.816423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.816441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.816641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.816661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.816863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.816881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.816996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.817014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 
00:30:18.914 [2024-07-24 19:07:03.817234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.817253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.817388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.817406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.817614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.817634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.817766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.817785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.817899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.817917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.818023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.818042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.818305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.818324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.818468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.818486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.818634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.818654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.818871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.818889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 
00:30:18.914 [2024-07-24 19:07:03.819009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.819028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.819160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.819178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.819436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.819455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.819594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.819621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.819747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.819766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.820005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.820023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.820149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.820168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.820350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.820368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.820563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.820582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 00:30:18.914 [2024-07-24 19:07:03.820745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.820767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it. 
00:30:18.914 [2024-07-24 19:07:03.820959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.914 [2024-07-24 19:07:03.820978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.914 qpair failed and we were unable to recover it.
00:30:18.914 [... the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats over 200 more times between 19:07:03.820978 and 19:07:03.860723 ...]
00:30:18.920 [2024-07-24 19:07:03.860853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.860871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.861007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.861025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.861331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.861349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.861488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.861506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.861651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.861671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.861811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.861829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.862113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.862131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.862355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.862373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.862564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.862582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.862725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.862743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 
00:30:18.920 [2024-07-24 19:07:03.862871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.862890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.863145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.863163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.920 [2024-07-24 19:07:03.863279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.920 [2024-07-24 19:07:03.863298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.920 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.863422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.863440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.863695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.863714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.863832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.863851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.863990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.864008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.864127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.864145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.864331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.864349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.864477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.864495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 
00:30:18.921 [2024-07-24 19:07:03.864694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.864713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.864846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.864864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.865072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.865090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.865276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.865294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.865494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.865512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.865660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.865680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.865943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.865962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.866144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.866163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.866296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.866315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.866614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.866637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 
00:30:18.921 [2024-07-24 19:07:03.866765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.866783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.866931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.866950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.867139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.867157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.867325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.867343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.867476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.867495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.867620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.867638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.867770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.867788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.867917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.867935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.868080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.868099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.868296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.868314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 
00:30:18.921 [2024-07-24 19:07:03.868450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.868469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.868596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.868631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.868929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.868948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:18.921 [2024-07-24 19:07:03.869161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.921 [2024-07-24 19:07:03.869180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:18.921 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.869397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.869415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.869663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.869683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.869816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.869835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.870045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.870063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.870308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.870327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.870456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.870475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 
00:30:19.200 [2024-07-24 19:07:03.870730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.870749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.200 qpair failed and we were unable to recover it. 00:30:19.200 [2024-07-24 19:07:03.870942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.200 [2024-07-24 19:07:03.870960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.871077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.871095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.871279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.871297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.871517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.871536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.871670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.871689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.871884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.871903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.872037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.872054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.872189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.872207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.872343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.872361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 
00:30:19.201 [2024-07-24 19:07:03.872551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.872569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.872798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.872817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.873071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.873090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.873213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.873232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.873403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.873422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.873650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.873669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.873849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.873867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.874059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.874077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.874278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.874297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.874479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.874501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 
00:30:19.201 [2024-07-24 19:07:03.874710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.874729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.874845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.874863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.874980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.874998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.875146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.875165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.875304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.875323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.875461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.875479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.875618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.875637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.875762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.875780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.875915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.875933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.876066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.876085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 
00:30:19.201 [2024-07-24 19:07:03.876211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.876228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.876430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.876448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.876574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.876592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.876736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.876754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.876984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.201 [2024-07-24 19:07:03.877002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.201 qpair failed and we were unable to recover it. 00:30:19.201 [2024-07-24 19:07:03.877146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.877165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.877360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.877380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.877576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.877594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.877805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.877823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.877949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.877966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 
00:30:19.202 [2024-07-24 19:07:03.878168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.878186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.878388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.878407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.878527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.878545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.878798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.878817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.878957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.878977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.879094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.879111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.879245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.879264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.879524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.879543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.879645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.879664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.879786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.879803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 
00:30:19.202 [2024-07-24 19:07:03.879953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.879972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.880095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.880114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.880310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.880328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.880543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.880561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.880752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.880771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.880902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.880920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.881127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.881145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.881335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.881354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.881540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.881559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.881755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.881777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 
00:30:19.202 [2024-07-24 19:07:03.881927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.881945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.882118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.882136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.882353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.882371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.882566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.882585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.882723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.882742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.882862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.882880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.883083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.883101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.883306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.883324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.883469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.883488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 00:30:19.202 [2024-07-24 19:07:03.883613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.202 [2024-07-24 19:07:03.883631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.202 qpair failed and we were unable to recover it. 
00:30:19.203 [2024-07-24 19:07:03.883768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.883787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.883925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.883944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.884201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.884219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.884354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.884372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.884500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.884519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.884749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.884767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.884896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.884914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.885042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.885060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.885247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.885265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.885389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.885407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 
00:30:19.203 [2024-07-24 19:07:03.885616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.885635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.885750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.885769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.885890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.885908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.886055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.886073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.886197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.886215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.886349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.886367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.886557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.886576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.886810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.886828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.887039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.887058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 00:30:19.203 [2024-07-24 19:07:03.887174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.203 [2024-07-24 19:07:03.887193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.203 qpair failed and we were unable to recover it. 
00:30:19.208 [2024-07-24 19:07:03.923715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 19:07:03.923735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 19:07:03.924056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.208 [2024-07-24 19:07:03.924074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.208 qpair failed and we were unable to recover it. 00:30:19.208 [2024-07-24 19:07:03.924290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.924308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.924569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.924588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.924796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.924815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.925010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.925029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.925247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.925269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.925491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.925509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.925653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.925673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.925771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.925791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 
00:30:19.209 [2024-07-24 19:07:03.925906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.925924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.926056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.926075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.926300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.926319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.926435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.926453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.926587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.926621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.926761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.926780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.926902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.926920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.927118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.927136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.927274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.927293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.927419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.927436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 
00:30:19.209 [2024-07-24 19:07:03.927574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.927592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.927817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.927836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.927975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.927994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.928125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.928144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.928293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.928312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.928515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.928534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.928745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.928765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.928911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.928930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.929186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.929205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.929400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.929418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 
00:30:19.209 [2024-07-24 19:07:03.929620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.929639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.929850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.929870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.929966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.929984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.930188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.930207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.930346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.930365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.930508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.930527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.209 qpair failed and we were unable to recover it. 00:30:19.209 [2024-07-24 19:07:03.930725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.209 [2024-07-24 19:07:03.930745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.930941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.930959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.931159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.931177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.931380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.931398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 
00:30:19.210 [2024-07-24 19:07:03.931589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.931613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.931731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.931750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.931880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.931899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.932123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.932141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.932275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.932293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.932553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.932572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.932720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.932742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.932944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.932962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.933078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.933097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.933236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.933254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 
00:30:19.210 [2024-07-24 19:07:03.933374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.933392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.933599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.933626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.933837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.933856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.933971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.933990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.934123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.934142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.934255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.934273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.934519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.934538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.934730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.934750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.934877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.934895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.935121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.935139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 
00:30:19.210 [2024-07-24 19:07:03.935277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.935295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.935417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.935436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.935535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.935553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.935741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.935761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.935890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.935909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.936109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.936128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.936252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.210 [2024-07-24 19:07:03.936271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.210 qpair failed and we were unable to recover it. 00:30:19.210 [2024-07-24 19:07:03.936503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.936521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.936620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.936638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.936766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.936784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 
00:30:19.211 [2024-07-24 19:07:03.937054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.937072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.937216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.937235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.937452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.937470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.937752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.937771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.937901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.937919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.938107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.938126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.938322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.938340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.938534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.938553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.938693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.938713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.938920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.938938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 
00:30:19.211 [2024-07-24 19:07:03.939058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.939076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.939214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.939232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.939431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.939450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.939635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.939654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.939861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.939879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.939995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.940013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.940150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.940172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.940296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.940314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.940513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.940531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.940718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.940737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 
00:30:19.211 [2024-07-24 19:07:03.940955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.940974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.941172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.941191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.941319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.941337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.941473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.941491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.941614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.941633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.941835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.941853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.941974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.941993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.942134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.942153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.942278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.942296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 00:30:19.211 [2024-07-24 19:07:03.942411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.211 [2024-07-24 19:07:03.942430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.211 qpair failed and we were unable to recover it. 
00:30:19.212 [2024-07-24 19:07:03.942624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.942644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.942782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.942800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.943036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.943055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.943242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.943263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.943379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.943397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.943533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.943552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.943749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.943768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.943909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.943927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.944048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.944067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.944205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.944223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 
00:30:19.212 [2024-07-24 19:07:03.944413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.944432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.944583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.944610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.944742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.944761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.944892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.944910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.945075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.945094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.945282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.945300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.945430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.945448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.945575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.945593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.945737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.945756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.945941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.945960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 
00:30:19.212 [2024-07-24 19:07:03.946131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.946150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.946352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.946371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.946480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.946498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.946722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.946741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.946856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.946875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.947006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.947025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.947208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.947230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.947421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.947440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.947558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.947576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.947810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.947829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 
00:30:19.212 [2024-07-24 19:07:03.948002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.948021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.948219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.948238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.948333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.948352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.948465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.948484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.948630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.948649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.948774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.948793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.948982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.212 [2024-07-24 19:07:03.949000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.212 qpair failed and we were unable to recover it. 00:30:19.212 [2024-07-24 19:07:03.949128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.213 [2024-07-24 19:07:03.949146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.213 qpair failed and we were unable to recover it. 00:30:19.213 [2024-07-24 19:07:03.949273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.213 [2024-07-24 19:07:03.949292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.213 qpair failed and we were unable to recover it. 00:30:19.213 [2024-07-24 19:07:03.949488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.213 [2024-07-24 19:07:03.949506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.213 qpair failed and we were unable to recover it. 
00:30:19.213 [2024-07-24 19:07:03.949648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.213 [2024-07-24 19:07:03.949668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:19.213 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every reconnect attempt, roughly 210 times between 19:07:03.949648 and 19:07:03.988483, always with errno = 111 for tqpair=0x7fe5e8000b90, addr=10.0.0.2, port=4420 ...]
00:30:19.219 [2024-07-24 19:07:03.988696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.988715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.988923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.988942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.989058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.989077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.989344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.989366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.989470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.989489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.989675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.989694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.989821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.989839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.989989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.990008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.990223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.990241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.990423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.990442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 
00:30:19.219 [2024-07-24 19:07:03.990640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.990659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.990784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.990803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.990989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.991007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.991204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.991222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.991338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.991356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.991553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.991572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.991789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.991808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.991996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.992015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.992121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.992140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.992261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.992280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 
00:30:19.219 [2024-07-24 19:07:03.992462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.992481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.992669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.992688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.992810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.992829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.993083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.993103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.993225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.993249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.993449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.993468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.993658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.993677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.993807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.993825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.993973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.993992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.994182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.994200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 
00:30:19.219 [2024-07-24 19:07:03.994399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.994417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.994628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.994648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.219 [2024-07-24 19:07:03.994900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.219 [2024-07-24 19:07:03.994919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.219 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.995038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.995055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.995288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.995307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.995558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.995576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.995768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.995787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.995976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.995995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.996269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.996287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.996501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.996519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 
00:30:19.220 [2024-07-24 19:07:03.996739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.996759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.996969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.996988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.997189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.997208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.997357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.997379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.997567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.997586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.997833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.997887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.998110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.998143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.998317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.998348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fda0 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.998642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.998662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.998923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.998942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 
00:30:19.220 [2024-07-24 19:07:03.999075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.999094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.999276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.999295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.999430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.999448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.999662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.999681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:03.999867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:03.999887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.000172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.000191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.000383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.000401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.000594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.000622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.000885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.000904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.001120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.001138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 
00:30:19.220 [2024-07-24 19:07:04.001391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.001410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.001639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.001658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.001786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.001805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.002060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.220 [2024-07-24 19:07:04.002078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.220 qpair failed and we were unable to recover it. 00:30:19.220 [2024-07-24 19:07:04.002336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.002354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.002577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.002595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.002736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.002755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.002951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.002970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.003099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.003117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.003313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.003332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 
00:30:19.221 [2024-07-24 19:07:04.003597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.003624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.003755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.003773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.003907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.003926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.004126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.004145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.004365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.004383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.004496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.004514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.004703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.004722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.004918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.004936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.005051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.005069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.005190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.005209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 
00:30:19.221 [2024-07-24 19:07:04.005327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.005346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.005553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.005572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.005708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.005728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.005928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.005950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.006183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.006202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.006336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.006355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.006672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.006691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.006879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.006898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.007085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.007104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.007269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.007288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 
00:30:19.221 [2024-07-24 19:07:04.007429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.007448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.007763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.007782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.007967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.007986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.008174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.008193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.008450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.008469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.008684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.008703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.008902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.008920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.009113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.009132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.221 [2024-07-24 19:07:04.009334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.221 [2024-07-24 19:07:04.009353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.221 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.009656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.009675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 
00:30:19.222 [2024-07-24 19:07:04.009879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.009898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.010095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.010113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.010262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.010280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.010396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.010415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.010546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.010565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.010695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.010714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.010920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.010939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.011059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.011077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.011211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.011228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.011418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.011436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 
00:30:19.222 [2024-07-24 19:07:04.011760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.011778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.011979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.011998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.012206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.012224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.012345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.012363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.012546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.012564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.012708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.012728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.012926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.012944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.013148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.013166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.013355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.013373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.013500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.013518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 
00:30:19.222 [2024-07-24 19:07:04.013641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.013659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.013791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.013810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.014028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.014046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.014165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.014186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.014469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.014487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.014794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.014812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.014928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.014946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.015229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.015247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.015481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.015499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 00:30:19.222 [2024-07-24 19:07:04.015733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.222 [2024-07-24 19:07:04.015751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.222 qpair failed and we were unable to recover it. 
00:30:19.222 [2024-07-24 19:07:04.015976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.222 [2024-07-24 19:07:04.015995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420
00:30:19.222 qpair failed and we were unable to recover it.
00:30:19.222-00:30:19.225 [... 19:07:04.016105 through 19:07:04.040708: the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for roughly 100 further connection attempts ...]
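For reference, errno = 111 on Linux is ECONNREFUSED: the host could reach 10.0.0.2, but nothing was accepting TCP connections on port 4420, which is consistent with this nvmf_target_disconnect test deliberately taking the target away. A minimal standalone sketch (plain POSIX sockets, not SPDK code; address and port copied from the log) that produces the same errno when no listener is bound to the port:

    /*
     * Minimal sketch: connect() to a reachable address with no listener
     * on the port fails with errno == ECONNREFUSED (111 on Linux), the
     * same errno reported by posix_sock_create above. Not SPDK code.
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

A refused connection means a TCP RST came back, i.e. the machine was up but the port was closed; if the address itself were unreachable, the attempts would instead surface as timeouts rather than errno 111.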
00:30:19.226 [... 19:07:04.040921 through 19:07:04.042490: the same failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for another 8 attempts, interleaved with the test script's xtrace output: ...]
00:30:19.226 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:19.226 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:30:19.226 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:19.226 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:19.226 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
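The xtrace lines above show the test harness checking a retry counter ((( i == 0 ))) while the host keeps attempting to reconnect. A generic bounded-retry sketch of that pattern follows; the names and the attempt budget are hypothetical, and this is not the actual autotest helper:

    /*
     * Generic bounded-retry sketch of the pattern suggested by the
     * xtrace above; hypothetical names, not the autotest helper.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for one qpair connect attempt; always fails here, just
     * as every attempt in the log above did. */
    static bool try_connect(void)
    {
        return false;
    }

    int main(void)
    {
        int i = 20;                                   /* attempt budget (hypothetical) */
        struct timespec backoff = { 0, 100000000L };  /* 100 ms between attempts */

        while (i > 0 && !try_connect()) {
            i--;
            nanosleep(&backoff, NULL);
        }
        if (i == 0) {                                 /* mirrors the "(( i == 0 ))" check */
            fprintf(stderr, "qpair failed and we were unable to recover it.\n");
            return 1;
        }
        return 0;
    }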
00:30:19.226-00:30:19.228 [... 19:07:04.042712 through 19:07:04.062790: the same failure sequence continues for roughly 90 further connection attempts, nearly all against tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420; one attempt (19:07:04.049901) reports tqpair=0x7fe5d8000b90 and two (19:07:04.050249, 19:07:04.050643) report tqpair=0x7fe5e0000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:19.228 [2024-07-24 19:07:04.063003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.228 [2024-07-24 19:07:04.063022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.228 qpair failed and we were unable to recover it. 00:30:19.228 [2024-07-24 19:07:04.063276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.228 [2024-07-24 19:07:04.063294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.063424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.063442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.063630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.063649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.063818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.063837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.063971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.063988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.064186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.064205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.064395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.064413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.064618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.064637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.064768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.064787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 
00:30:19.229 [2024-07-24 19:07:04.064956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.064975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.065185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.065203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.065424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.065443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.065600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.065625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.065795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.065813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.065959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.065979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.066112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.066130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.066276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.066294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.066430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.066448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.066589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.066617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 
00:30:19.229 [2024-07-24 19:07:04.066755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.066773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.066905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.066924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.067064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.067082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.067216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.067238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.067397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.067416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.067730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.067749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.067890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.067908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.068048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.068067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.068181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.068199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.068405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.068423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 
00:30:19.229 [2024-07-24 19:07:04.068636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.068656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.068775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.068793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.068933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.068951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.069081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.069099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.069373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.069392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.069510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.229 [2024-07-24 19:07:04.069528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.229 qpair failed and we were unable to recover it. 00:30:19.229 [2024-07-24 19:07:04.069726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.069745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.069935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.069954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.070164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.070182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.070296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.070314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 
00:30:19.230 [2024-07-24 19:07:04.070502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.070520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.070725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.070743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.070944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.070963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.071156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.071174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.071373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.071391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.071525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.071543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.071733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.071751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.071897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.071916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.072064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.072083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.072210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.072228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 
00:30:19.230 [2024-07-24 19:07:04.072364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.072382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.072536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.072555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.072764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.072783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.072919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.072937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.073062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.073081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.073280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.073298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.073419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.073439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.073573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.073593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.073739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.073758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.073897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.073917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 
00:30:19.230 [2024-07-24 19:07:04.074048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.074067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.074206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.074224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.074422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.074440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.074576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.074599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.074801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.074819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.075028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.075047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.075244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.075263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.075515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.075533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.230 [2024-07-24 19:07:04.075656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.230 [2024-07-24 19:07:04.075674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.230 qpair failed and we were unable to recover it. 00:30:19.231 [2024-07-24 19:07:04.075865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.231 [2024-07-24 19:07:04.075884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.231 qpair failed and we were unable to recover it. 
[... failure sequence for tqpair=0x7fe5e8000b90 continues, timestamps 19:07:04.076077 through 19:07:04.076950 ...]
00:30:19.231 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... failure sequence continues, timestamps 19:07:04.077139 through 19:07:04.077313 ...]
00:30:19.231 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... failure sequence continues, timestamps 19:07:04.077411 through 19:07:04.077583 ...]
00:30:19.231 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... failure sequence continues, timestamps 19:07:04.077795 through 19:07:04.077967 ...]
00:30:19.231 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... failure sequence continues, timestamps 19:07:04.078089 through 19:07:04.079336 ...]
[... failure sequence for tqpair=0x7fe5e8000b90 continues, timestamps 19:07:04.079453 through 19:07:04.087455 ...]
00:30:19.232 [2024-07-24 19:07:04.087665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.232 [2024-07-24 19:07:04.087702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e0000b90 with addr=10.0.0.2, port=4420
00:30:19.232 qpair failed and we were unable to recover it.
[... two more failure sequences for tqpair=0x7fe5e0000b90 (timestamps 19:07:04.087927 through 19:07:04.088144), then the sequence resumes for tqpair=0x7fe5e8000b90 (timestamps 19:07:04.088350 through 19:07:04.089543) ...]
00:30:19.232-00:30:19.234 [connect() failed, errno = 111 / sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. - repeated 60 more times, 19:07:04.089675-19:07:04.102333]
00:30:19.234 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 9 times, 19:07:04.102531-19:07:04.104235]
00:30:19.235 Malloc0
00:30:19.235 [2024-07-24 19:07:04.104513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.235 [2024-07-24 19:07:04.104532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.235 qpair failed and we were unable to recover it.
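The lone "Malloc0" line is the bdev name echoed back by the RPC that creates the test's backing device. A hedged sketch of that call (only the -b Malloc0 name comes from the log; the sizes and socket path are assumptions):
    # sketch: 64 MiB malloc bdev with 512-byte blocks (sizes assumed)
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc0 64 512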
00:30:19.235 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 2 times, 19:07:04.104814-19:07:04.105057]
00:30:19.235 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.235 [connect()/qpair-failure messages repeated 2 times, 19:07:04.105189-19:07:04.105508]
00:30:19.235 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:19.235 [connect()/qpair-failure message, 19:07:04.105706]
00:30:19.235 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.235 [connect()/qpair-failure message, 19:07:04.105936]
00:30:19.235 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:19.235 [connect()/qpair-failure messages repeated 2 times, 19:07:04.106206-19:07:04.106447]
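The rpc_cmd at target_disconnect.sh@21 is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent standalone call would look like the sketch below (flags verbatim from the trace; the RPC socket path is an assumption):
    # sketch: create the NVMe-oF TCP transport on the target
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o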
00:30:19.235 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 20 times, 19:07:04.106653-19:07:04.111068]
00:30:19.235 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 4 times, 19:07:04.111270-19:07:04.112087]
00:30:19.236 [2024-07-24 19:07:04.112130] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:19.236 [connect()/qpair-failure messages repeated 5 times, 19:07:04.112288-19:07:04.113116]
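The nvmf_tcp_create notice shows the transport coming up on the target side while the initiator's reconnect loop is still failing. When debugging interactively, one way to confirm it registered (a sketch; socket path assumed) is:
    # sketch: list the target's active transports
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports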
00:30:19.236 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 30 times, 19:07:04.113232-19:07:04.119969]
00:30:19.236 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 3 times, 19:07:04.120182-19:07:04.120687]
00:30:19.237 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.237 [connect()/qpair-failure message, 19:07:04.120952]
00:30:19.237 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:19.237 [connect()/qpair-failure message, 19:07:04.121279]
00:30:19.237 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.237 [connect()/qpair-failure messages repeated 2 times, 19:07:04.121559-19:07:04.121804]
00:30:19.237 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:19.237 [connect()/qpair-failure message, 19:07:04.121952]
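target_disconnect.sh@22 creates the subsystem the initiator keeps dialing; in these tests that is normally followed by attaching the Malloc0 namespace and a TCP listener on 10.0.0.2:4420. A hedged sketch of the sequence (the first line's flags are verbatim from the trace; the other two are assumptions based on the usual flow, socket path assumed):
    # sketch: subsystem, namespace, then listener
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420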
00:30:19.237-00:30:19.238 [connect()/qpair-failure messages for tqpair=0x7fe5e8000b90 repeated 40 times, 19:07:04.122120-19:07:04.131272]
00:30:19.238 [2024-07-24 19:07:04.131396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.131413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.131582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.131613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.131732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.131750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.132005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.132023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.132281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.132300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.132482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.132499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.132641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.132660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.238 [2024-07-24 19:07:04.132944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.132963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 00:30:19.238 [2024-07-24 19:07:04.133157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.238 [2024-07-24 19:07:04.133175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5e8000b90 with addr=10.0.0.2, port=4420 00:30:19.238 qpair failed and we were unable to recover it. 
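errno = 111 is ECONNREFUSED on Linux: at this point in the run the host is already dialing 10.0.0.2:4420, but the target's listener has not been added yet (the nvmf_subsystem_add_listener trace only appears further down), so every connect() is refused immediately and each attempt ends with a dead qpair. A minimal standalone sketch of the failing call follows; it is illustrative, not SPDK code, and 127.0.0.1 stands in for the test bed's 10.0.0.2.

/* econnrefused_demo.c - minimal sketch (not SPDK code): reproduce the
 * "connect() failed, errno = 111" lines by dialing a port nobody listens on. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* stand-in for 10.0.0.2 */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With no listener bound to the port, the kernel answers the SYN
         * with RST and connect() fails with errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Compiled with cc econnrefused_demo.c and run on a machine with nothing bound to port 4420, it prints the same "connect() failed, errno = 111" seen throughout this stretch of the log.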
00:30:19.238 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:19.238 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.238 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:19.238 [... connect() failed (errno = 111) / sock connection error / qpair failed sequences interleave with the trace above and continue from 2024-07-24 19:07:04.133320 through 19:07:04.135244 ...]
00:30:19.238 [... the connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats from 2024-07-24 19:07:04.135426 through 19:07:04.140140, tqpair=0x7fe5e8000b90, addr=10.0.0.2, port=4420 ...]
00:30:19.239 [... connect() failed (errno = 111) / qpair failed sequences continue from 2024-07-24 19:07:04.140344 through 19:07:04.142036, interleaved with the trace lines below ...]
00:30:19.239 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.239 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:19.239 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.239 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:19.239 [... the connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats from 2024-07-24 19:07:04.142179 through 19:07:04.144011 ...]
00:30:19.240 [... final connect() failed (errno = 111) / qpair failed sequences at 2024-07-24 19:07:04.144125 and 19:07:04.144422 ...]
00:30:19.240 [2024-07-24 19:07:04.144685] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:19.240 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.240 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:19.240 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:19.240 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:19.240 [2024-07-24 19:07:04.152990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.240 [2024-07-24 19:07:04.153142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.240 [2024-07-24 19:07:04.153174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.240 [2024-07-24 19:07:04.153188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.240 [2024-07-24 19:07:04.153201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:19.240 [2024-07-24 19:07:04.153235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:19.240 qpair failed and we were unable to recover it.
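The status pair in that CONNECT failure is worth decoding: sct 1 is the Command Specific status code type, and for a Fabrics CONNECT command sc 130 (0x82) is Connect Invalid Parameters, which matches the target-side complaint a few lines earlier that controller ID 0x1 is unknown when the host tries to re-attach an I/O queue pair after the forced disconnect. A small illustrative decoder follows, with the case values taken from the NVMe specification's CONNECT status table; the helper is a sketch, not an SPDK API.

/* decode_connect_status.c - illustrative decoder for the "sct 1, sc 130"
 * pair in the log; status values per the NVMe-oF CONNECT command. */
#include <stdio.h>

/* Status Code Type 1 = Command Specific. For CONNECT, the spec defines: */
static const char *connect_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x80: return "Connect Incompatible Format";
    case 0x81: return "Connect Controller Busy";
    case 0x82: return "Connect Invalid Parameters";
    default:   return "unknown status code";
    }
}

int main(void)
{
    unsigned sct = 1, sc = 130;            /* values from the log line */
    printf("sct %u, sc %u (0x%02x): %s\n", sct, sc, sc, connect_sc_str(sc));
    return 0;
}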
00:30:19.240 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:19.240 19:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2691290
00:30:19.240 [... the seven-line failure block above (ctrlr.c Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, failed to poll/connect, CQ transport error -6, qpair failed) repeats at 2024-07-24 19:07:04.162867, 19:07:04.172923 and 19:07:04.183118 ...]
00:30:19.501 [... the same failure block recurs for every subsequent reconnect attempt, timestamps advancing from 2024-07-24 19:07:04.192867 through 19:07:04.544538: each attempt logs ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1, then nvme_fabric.c Connect command failed, rc -5 and Connect command completed with error: sct 1, sc 130, then nvme_tcp.c Failed to poll NVMe-oF Fabric CONNECT command and Failed to connect tqpair=0x7fe5e8000b90, then nvme_qpair.c CQ transport error -6 (No such device or address) on qpair id 1, ending with "qpair failed and we were unable to recover it." ...]
00:30:19.764 [2024-07-24 19:07:04.554193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.554306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.764 [2024-07-24 19:07:04.554330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.764 [2024-07-24 19:07:04.554342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.764 [2024-07-24 19:07:04.554353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.764 [2024-07-24 19:07:04.554379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.764 qpair failed and we were unable to recover it. 00:30:19.764 [2024-07-24 19:07:04.564170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.564312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.764 [2024-07-24 19:07:04.564337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.764 [2024-07-24 19:07:04.564354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.764 [2024-07-24 19:07:04.564365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.764 [2024-07-24 19:07:04.564392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.764 qpair failed and we were unable to recover it. 00:30:19.764 [2024-07-24 19:07:04.574245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.574398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.764 [2024-07-24 19:07:04.574424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.764 [2024-07-24 19:07:04.574436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.764 [2024-07-24 19:07:04.574448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.764 [2024-07-24 19:07:04.574474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.764 qpair failed and we were unable to recover it. 
00:30:19.764 [2024-07-24 19:07:04.584420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.584556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.764 [2024-07-24 19:07:04.584582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.764 [2024-07-24 19:07:04.584594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.764 [2024-07-24 19:07:04.584613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.764 [2024-07-24 19:07:04.584639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.764 qpair failed and we were unable to recover it. 00:30:19.764 [2024-07-24 19:07:04.594275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.594396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.764 [2024-07-24 19:07:04.594430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.764 [2024-07-24 19:07:04.594443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.764 [2024-07-24 19:07:04.594454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.764 [2024-07-24 19:07:04.594480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.764 qpair failed and we were unable to recover it. 00:30:19.764 [2024-07-24 19:07:04.604337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.604459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.764 [2024-07-24 19:07:04.604486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.764 [2024-07-24 19:07:04.604499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.764 [2024-07-24 19:07:04.604510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.764 [2024-07-24 19:07:04.604536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.764 qpair failed and we were unable to recover it. 
00:30:19.764 [2024-07-24 19:07:04.614349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.764 [2024-07-24 19:07:04.614498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.614524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.614537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.614549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.614575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.624531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.624682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.624709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.624722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.624734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.624761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.634410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.634552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.634577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.634590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.634608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.634636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 
00:30:19.765 [2024-07-24 19:07:04.644451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.644637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.644664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.644676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.644687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.644714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.654467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.654609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.654641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.654654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.654665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.654693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.664699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.664853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.664880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.664892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.664904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.664929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 
00:30:19.765 [2024-07-24 19:07:04.674552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.674720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.674746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.674758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.674769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.674795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.684529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.684656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.684683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.684696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.684707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.684732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.694566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.694686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.694713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.694725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.694737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.694767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 
00:30:19.765 [2024-07-24 19:07:04.704907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.705053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.705079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.705091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.705102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.705127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.714685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.714824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.714850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.714862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.714874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.714900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.765 [2024-07-24 19:07:04.724622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.724735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.724759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.724773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.724784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.724809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 
00:30:19.765 [2024-07-24 19:07:04.734731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.765 [2024-07-24 19:07:04.734869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.765 [2024-07-24 19:07:04.734895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.765 [2024-07-24 19:07:04.734908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.765 [2024-07-24 19:07:04.734919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.765 [2024-07-24 19:07:04.734944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.765 qpair failed and we were unable to recover it. 00:30:19.766 [2024-07-24 19:07:04.744984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.766 [2024-07-24 19:07:04.745130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.766 [2024-07-24 19:07:04.745161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.766 [2024-07-24 19:07:04.745174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.766 [2024-07-24 19:07:04.745185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.766 [2024-07-24 19:07:04.745211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.766 qpair failed and we were unable to recover it. 00:30:19.766 [2024-07-24 19:07:04.754797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.766 [2024-07-24 19:07:04.754916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.766 [2024-07-24 19:07:04.754942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.766 [2024-07-24 19:07:04.754955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.766 [2024-07-24 19:07:04.754965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.766 [2024-07-24 19:07:04.754991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.766 qpair failed and we were unable to recover it. 
00:30:19.766 [2024-07-24 19:07:04.764837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.766 [2024-07-24 19:07:04.764945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.766 [2024-07-24 19:07:04.764970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.766 [2024-07-24 19:07:04.764982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.766 [2024-07-24 19:07:04.764993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:19.766 [2024-07-24 19:07:04.765019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.766 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.774881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.774992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.775023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.775035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.775047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.775072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.785104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.785245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.785271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.785284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.785300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.785326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-07-24 19:07:04.794988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.795147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.795174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.795186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.795198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.795225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.805000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.805116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.805147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.805159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.805170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.805196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.814944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.815057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.815088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.815101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.815114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.815139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-07-24 19:07:04.825257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.825432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.825458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.825471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.825482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.825508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.835104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.835229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.835254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.835267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.835279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.835304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.845168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.845276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.845303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.845316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.845328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.845354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-07-24 19:07:04.855151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.855276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.855303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.855316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.855327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.855354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.865376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.865518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.865543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.865556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.865567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.865593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.875221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.875370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.875397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.875419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.875431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.875457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-07-24 19:07:04.885293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.885409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.885435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.885447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-07-24 19:07:04.885459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.026 [2024-07-24 19:07:04.885485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.026 qpair failed and we were unable to recover it. 00:30:20.026 [2024-07-24 19:07:04.895307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-07-24 19:07:04.895415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-07-24 19:07:04.895441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-07-24 19:07:04.895454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.895465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.895492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.905521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.905694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.905721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.905734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.905745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.905771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-07-24 19:07:04.915353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.915474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.915500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.915512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.915524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.915549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.925363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.925475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.925507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.925520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.925532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.925557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.935432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.935542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.935566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.935579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.935590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.935639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-07-24 19:07:04.945682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.945872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.945899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.945911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.945922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.945949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.955520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.955641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.955666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.955678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.955690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.955715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.965565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.965683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.965710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.965727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.965738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.965765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-07-24 19:07:04.975635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.975739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.975764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.975776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.975787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.975814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.985752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.985890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.985916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.985929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.985940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.985965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:04.995638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:04.995756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:04.995781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:04.995794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:04.995805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:04.995832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-07-24 19:07:05.005687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:05.005791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:05.005817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:05.005829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:05.005841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:05.005868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:05.015645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:05.015761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:05.015787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:05.015800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:05.015811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.027 [2024-07-24 19:07:05.015837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.027 qpair failed and we were unable to recover it. 00:30:20.027 [2024-07-24 19:07:05.025918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-07-24 19:07:05.026110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-07-24 19:07:05.026136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-07-24 19:07:05.026148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-07-24 19:07:05.026159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.028 [2024-07-24 19:07:05.026185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.028 qpair failed and we were unable to recover it. 
00:30:20.288 [2024-07-24 19:07:05.035774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.035891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.035916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.035929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.035939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.035965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.045846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.045955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.045981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.045994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.046004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.046029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.055820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.055927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.055956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.055969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.055980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.056005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 
00:30:20.288 [2024-07-24 19:07:05.065998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.066142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.066168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.066181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.066192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.066218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.075972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.076099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.076125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.076137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.076149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.076175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.085946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.086092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.086117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.086130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.086141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.086167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 
00:30:20.288 [2024-07-24 19:07:05.096024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.096136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.096169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.096182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.096193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.096226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.106192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.106334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.106360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.106373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.106384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.106411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.115991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.116121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.116146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.116157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.116168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.116194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 
00:30:20.288 [2024-07-24 19:07:05.126028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.126142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.126173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.126186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.126197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.126224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.136075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.136221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.136247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.136260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.136271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.136298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 00:30:20.288 [2024-07-24 19:07:05.146286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.146453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.146485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.146497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.146509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.288 [2024-07-24 19:07:05.146535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.288 qpair failed and we were unable to recover it. 
00:30:20.288 [2024-07-24 19:07:05.156161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.288 [2024-07-24 19:07:05.156286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.288 [2024-07-24 19:07:05.156313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.288 [2024-07-24 19:07:05.156325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.288 [2024-07-24 19:07:05.156336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.156361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.166196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.166317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.166348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.166360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.166372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.166397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.176219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.176325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.176349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.176363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.176374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.176399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 
00:30:20.289 [2024-07-24 19:07:05.186495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.186644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.186669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.186682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.186700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.186726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.196235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.196351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.196377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.196390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.196401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.196427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.206357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.206502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.206529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.206542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.206553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.206580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 
00:30:20.289 [2024-07-24 19:07:05.216286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.216402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.216427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.216440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.216451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.216477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.226599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.226744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.226770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.226783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.226794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.226821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.236337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.236470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.236497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.236510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.236522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.236548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 
00:30:20.289 [2024-07-24 19:07:05.246455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.246609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.246636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.246649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.246660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.246686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.256389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.256500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.256534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.256546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.256557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.256583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.266704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.266851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.266877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.266890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.266901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.266928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 
00:30:20.289 [2024-07-24 19:07:05.276475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.276635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.276661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.276674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.276691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.276717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.289 [2024-07-24 19:07:05.286490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-07-24 19:07:05.286615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-07-24 19:07:05.286641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-07-24 19:07:05.286653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-07-24 19:07:05.286664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.289 [2024-07-24 19:07:05.286691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.289 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.296618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.296735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.296760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.296772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.296785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.296811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 
00:30:20.550 [2024-07-24 19:07:05.306853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.306998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.307024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.307038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.307049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.307076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.316738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.316859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.316884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.316897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.316908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.316934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.326705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.326818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.326849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.326862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.326873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.326899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 
00:30:20.550 [2024-07-24 19:07:05.336666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.336803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.336828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.336841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.336852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.336879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.347018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.347190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.347216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.347229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.347240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.347266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.356747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.356904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.356930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.356943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.356954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.356980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 
00:30:20.550 [2024-07-24 19:07:05.366809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.366971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.366997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.367015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.367026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.367051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.376898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.377028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.377054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.377066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.377078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.377103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 00:30:20.550 [2024-07-24 19:07:05.387131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.387302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.550 [2024-07-24 19:07:05.387328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.550 [2024-07-24 19:07:05.387341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.550 [2024-07-24 19:07:05.387352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.550 [2024-07-24 19:07:05.387380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.550 qpair failed and we were unable to recover it. 
00:30:20.550 [2024-07-24 19:07:05.396963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.550 [2024-07-24 19:07:05.397120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.397148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.397161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.397172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.397199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.406957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.407068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.407093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.407106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.407117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.407143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.416990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.417132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.417158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.417170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.417181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.417208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 
00:30:20.551 [2024-07-24 19:07:05.427294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.427462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.427486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.427498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.427509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.427535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.437079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.437232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.437258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.437270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.437282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.437307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.447114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.447228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.447260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.447274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.447285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.447312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 
00:30:20.551 [2024-07-24 19:07:05.457175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.457315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.457346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.457359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.457370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.457396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.467416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.467554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.467580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.467593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.467613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.467639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.477250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.477401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.477427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.477439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.477451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.477476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 
00:30:20.551 [2024-07-24 19:07:05.487204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.487338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.487364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.487376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.487388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.487412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.497285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.497398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.497431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.497444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.497455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.497486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.507473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.507627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.507655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.507667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.507678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.507704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 
00:30:20.551 [2024-07-24 19:07:05.517406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.517518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.517542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.517554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.551 [2024-07-24 19:07:05.517565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.551 [2024-07-24 19:07:05.517591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.551 qpair failed and we were unable to recover it. 00:30:20.551 [2024-07-24 19:07:05.527439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.551 [2024-07-24 19:07:05.527548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.551 [2024-07-24 19:07:05.527572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.551 [2024-07-24 19:07:05.527584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.552 [2024-07-24 19:07:05.527596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.552 [2024-07-24 19:07:05.527628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.552 qpair failed and we were unable to recover it. 00:30:20.552 [2024-07-24 19:07:05.537478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.552 [2024-07-24 19:07:05.537630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.552 [2024-07-24 19:07:05.537656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.552 [2024-07-24 19:07:05.537668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.552 [2024-07-24 19:07:05.537679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.552 [2024-07-24 19:07:05.537704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.552 qpair failed and we were unable to recover it. 
00:30:20.552 [2024-07-24 19:07:05.547736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.552 [2024-07-24 19:07:05.547906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.552 [2024-07-24 19:07:05.547940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.552 [2024-07-24 19:07:05.547953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.552 [2024-07-24 19:07:05.547964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.552 [2024-07-24 19:07:05.547991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.552 qpair failed and we were unable to recover it. 00:30:20.812 [2024-07-24 19:07:05.557564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.812 [2024-07-24 19:07:05.557681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.812 [2024-07-24 19:07:05.557714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.812 [2024-07-24 19:07:05.557727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.812 [2024-07-24 19:07:05.557738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.812 [2024-07-24 19:07:05.557765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.812 qpair failed and we were unable to recover it. 00:30:20.812 [2024-07-24 19:07:05.567579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.812 [2024-07-24 19:07:05.567752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.812 [2024-07-24 19:07:05.567779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.812 [2024-07-24 19:07:05.567792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.812 [2024-07-24 19:07:05.567803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.812 [2024-07-24 19:07:05.567829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.812 qpair failed and we were unable to recover it. 
00:30:20.812 [2024-07-24 19:07:05.577586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.812 [2024-07-24 19:07:05.577702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.812 [2024-07-24 19:07:05.577727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.812 [2024-07-24 19:07:05.577739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.812 [2024-07-24 19:07:05.577750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.812 [2024-07-24 19:07:05.577776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.812 qpair failed and we were unable to recover it. 00:30:20.812 [2024-07-24 19:07:05.587817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.812 [2024-07-24 19:07:05.587984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.812 [2024-07-24 19:07:05.588010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.812 [2024-07-24 19:07:05.588022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.812 [2024-07-24 19:07:05.588038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.812 [2024-07-24 19:07:05.588063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.812 qpair failed and we were unable to recover it. 00:30:20.812 [2024-07-24 19:07:05.597633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.812 [2024-07-24 19:07:05.597748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.812 [2024-07-24 19:07:05.597774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.812 [2024-07-24 19:07:05.597787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.812 [2024-07-24 19:07:05.597799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.812 [2024-07-24 19:07:05.597830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.812 qpair failed and we were unable to recover it. 
00:30:20.813 [2024-07-24 19:07:05.607690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.813 [2024-07-24 19:07:05.607806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.813 [2024-07-24 19:07:05.607833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.813 [2024-07-24 19:07:05.607845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.813 [2024-07-24 19:07:05.607856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.813 [2024-07-24 19:07:05.607882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.813 qpair failed and we were unable to recover it. 00:30:20.813 [2024-07-24 19:07:05.617755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.813 [2024-07-24 19:07:05.617888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.813 [2024-07-24 19:07:05.617914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.813 [2024-07-24 19:07:05.617926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.813 [2024-07-24 19:07:05.617938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.813 [2024-07-24 19:07:05.617964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.813 qpair failed and we were unable to recover it. 00:30:20.813 [2024-07-24 19:07:05.627944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.813 [2024-07-24 19:07:05.628078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.813 [2024-07-24 19:07:05.628104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.813 [2024-07-24 19:07:05.628116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.813 [2024-07-24 19:07:05.628128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:20.813 [2024-07-24 19:07:05.628153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:20.813 qpair failed and we were unable to recover it. 
00:30:20.813 [2024-07-24 19:07:05.637794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.637923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.637948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.637961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.637971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.637997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.647859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.647981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.648009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.648022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.648033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.648059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.657820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.657933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.657958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.657971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.657982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.658009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.668121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.668287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.668314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.668326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.668338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.668365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.677928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.678044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.678069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.678082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.678104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.678130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.687977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.688084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.688109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.688122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.688133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.688158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.697954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.698074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.698101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.698114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.698125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.698151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.708258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.708436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.708462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.708474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.708485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.708511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.718051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.718195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.718221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.718233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.718244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.718269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.728091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.813 [2024-07-24 19:07:05.728211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.813 [2024-07-24 19:07:05.728237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.813 [2024-07-24 19:07:05.728250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.813 [2024-07-24 19:07:05.728261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.813 [2024-07-24 19:07:05.728287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.813 qpair failed and we were unable to recover it.
00:30:20.813 [2024-07-24 19:07:05.738106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.738221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.738247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.738259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.738270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.738296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.748325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.748481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.748508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.748521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.748532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.748558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.758134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.758270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.758297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.758310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.758321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.758347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.768143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.768260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.768285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.768303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.768314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.768341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.778205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.778351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.778377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.778390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.778400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.778427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.788429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.788594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.788627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.788640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.788651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.788678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.798291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.798411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.798437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.798450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.798461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.798486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.808296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.808402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.808427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.808440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.808451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.808477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:20.814 [2024-07-24 19:07:05.818429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.814 [2024-07-24 19:07:05.818565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.814 [2024-07-24 19:07:05.818591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.814 [2024-07-24 19:07:05.818609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.814 [2024-07-24 19:07:05.818621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:20.814 [2024-07-24 19:07:05.818647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:20.814 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.828594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.828751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.828778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.828791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.828802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.828827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.838442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.838557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.838582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.838594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.838611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.838637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.848478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.848645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.848672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.848685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.848696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.848723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.858546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.858675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.858706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.858719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.858731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.858756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.868704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.868841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.868868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.868881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.868892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.868918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.878553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.878670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.878695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.878707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.878718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.878744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.888641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.888792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.074 [2024-07-24 19:07:05.888818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.074 [2024-07-24 19:07:05.888831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.074 [2024-07-24 19:07:05.888842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.074 [2024-07-24 19:07:05.888868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.074 qpair failed and we were unable to recover it.
00:30:21.074 [2024-07-24 19:07:05.898631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.074 [2024-07-24 19:07:05.898751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.898778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.898791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.898802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.898834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.908828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.908972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.908999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.909011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.909023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.909048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.918632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.918749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.918774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.918787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.918799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.918833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.928756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.928900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.928926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.928939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.928950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.928976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.938755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.938870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.938901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.938914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.938926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.938951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.949030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.949174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.949205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.949218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.949229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.949255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.958802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.958950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.958977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.958989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.959001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.959027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.968752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.968864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.968889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.968902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.968913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.968946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.978930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.979054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.979080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.979092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.979103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.979129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.989099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.989234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.989260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.989272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.989284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.989314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:05.998951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:05.999073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:05.999101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:05.999114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:05.999126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:05.999151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:06.009015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:06.009133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:06.009160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:06.009173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:06.009184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:06.009210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:06.018998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:06.019111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:06.019137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.075 [2024-07-24 19:07:06.019149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.075 [2024-07-24 19:07:06.019161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.075 [2024-07-24 19:07:06.019194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.075 qpair failed and we were unable to recover it.
00:30:21.075 [2024-07-24 19:07:06.029236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.075 [2024-07-24 19:07:06.029411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.075 [2024-07-24 19:07:06.029436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.076 [2024-07-24 19:07:06.029449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.076 [2024-07-24 19:07:06.029460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.076 [2024-07-24 19:07:06.029485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.076 qpair failed and we were unable to recover it.
00:30:21.076 [2024-07-24 19:07:06.039062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.076 [2024-07-24 19:07:06.039184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.076 [2024-07-24 19:07:06.039209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.076 [2024-07-24 19:07:06.039221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.076 [2024-07-24 19:07:06.039233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.076 [2024-07-24 19:07:06.039259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.076 qpair failed and we were unable to recover it.
00:30:21.076 [2024-07-24 19:07:06.049092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.076 [2024-07-24 19:07:06.049200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.076 [2024-07-24 19:07:06.049226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.076 [2024-07-24 19:07:06.049239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.076 [2024-07-24 19:07:06.049250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.076 [2024-07-24 19:07:06.049275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.076 qpair failed and we were unable to recover it.
00:30:21.076 [2024-07-24 19:07:06.059123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.076 [2024-07-24 19:07:06.059262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.076 [2024-07-24 19:07:06.059289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.076 [2024-07-24 19:07:06.059302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.076 [2024-07-24 19:07:06.059313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.076 [2024-07-24 19:07:06.059340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.076 qpair failed and we were unable to recover it.
00:30:21.076 [2024-07-24 19:07:06.069346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.076 [2024-07-24 19:07:06.069486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.076 [2024-07-24 19:07:06.069514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.076 [2024-07-24 19:07:06.069526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.076 [2024-07-24 19:07:06.069537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.076 [2024-07-24 19:07:06.069563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.076 qpair failed and we were unable to recover it.
00:30:21.076 [2024-07-24 19:07:06.079245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.076 [2024-07-24 19:07:06.079368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.076 [2024-07-24 19:07:06.079394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.076 [2024-07-24 19:07:06.079406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.076 [2024-07-24 19:07:06.079422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.076 [2024-07-24 19:07:06.079449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.076 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.089288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.089451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.089477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.089490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.089501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.089527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.099268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.099384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.099414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.099427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.099439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.099464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.109514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.109662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.109691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.109704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.109716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.109743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.119389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.119506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.119531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.119543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.119554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.119579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.129314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.129435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.129462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.129475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.129486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.129512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.139335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.139444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.139469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.139483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.139494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.139527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.149662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.149808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.149835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.149848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.149859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.149885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.159481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.159600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.159634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.159646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.159658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.159684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.169481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.336 [2024-07-24 19:07:06.169629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.336 [2024-07-24 19:07:06.169655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.336 [2024-07-24 19:07:06.169672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.336 [2024-07-24 19:07:06.169684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.336 [2024-07-24 19:07:06.169710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.336 qpair failed and we were unable to recover it.
00:30:21.336 [2024-07-24 19:07:06.179463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.179571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.179595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.179614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.179626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.179652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.189810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.189985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.190011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.190023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.190035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.190061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.199601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.199728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.199759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.199772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.199783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.199809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.209697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.209814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.209840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.209853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.209865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.209891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.219691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.219809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.219835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.219848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.219858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.219884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.229942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.230076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.230101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.230113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.230125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.230151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.239768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.239919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.239944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.239957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.239968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.239994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.249798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.249911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.249936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.249948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.249959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.249985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.259858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.259976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.260002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.260023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.260035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.260065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.270095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.270267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.270294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.270306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.270318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.270344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.279899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.280050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.280075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.280088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.280099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.280126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.289869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.290031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.290057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.290071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.290082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.290108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.299959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.337 [2024-07-24 19:07:06.300065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.337 [2024-07-24 19:07:06.300089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.337 [2024-07-24 19:07:06.300102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.337 [2024-07-24 19:07:06.300113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.337 [2024-07-24 19:07:06.300139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.337 qpair failed and we were unable to recover it.
00:30:21.337 [2024-07-24 19:07:06.310238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.338 [2024-07-24 19:07:06.310396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.338 [2024-07-24 19:07:06.310423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.338 [2024-07-24 19:07:06.310436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.338 [2024-07-24 19:07:06.310447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.338 [2024-07-24 19:07:06.310472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.338 qpair failed and we were unable to recover it.
00:30:21.338 [2024-07-24 19:07:06.320054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.338 [2024-07-24 19:07:06.320166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.338 [2024-07-24 19:07:06.320190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.338 [2024-07-24 19:07:06.320204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.338 [2024-07-24 19:07:06.320215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.338 [2024-07-24 19:07:06.320241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.338 qpair failed and we were unable to recover it.
00:30:21.338 [2024-07-24 19:07:06.330039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.338 [2024-07-24 19:07:06.330149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.338 [2024-07-24 19:07:06.330173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.338 [2024-07-24 19:07:06.330186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.338 [2024-07-24 19:07:06.330197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.338 [2024-07-24 19:07:06.330222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.338 qpair failed and we were unable to recover it.
00:30:21.338 [2024-07-24 19:07:06.340053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.338 [2024-07-24 19:07:06.340191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.338 [2024-07-24 19:07:06.340217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.338 [2024-07-24 19:07:06.340230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.338 [2024-07-24 19:07:06.340242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.338 [2024-07-24 19:07:06.340268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.338 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.350327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.350479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.350510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.350523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.350534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.350561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.360178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.360323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.360349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.360362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.360373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.360398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.370212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.370318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.370344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.370356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.370367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.370394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.380222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.380333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.380357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.380370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.380381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.380407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.390457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.390598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.390631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.390644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.390654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.390685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.400335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.400468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.400496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.400509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.400520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.400546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.410318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.410468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.410495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.410507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.410519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.410544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.420354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.420466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.420493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.420505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.420516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.420542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.430626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.598 [2024-07-24 19:07:06.430773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.598 [2024-07-24 19:07:06.430799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.598 [2024-07-24 19:07:06.430812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.598 [2024-07-24 19:07:06.430823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.598 [2024-07-24 19:07:06.430848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.598 qpair failed and we were unable to recover it.
00:30:21.598 [2024-07-24 19:07:06.440392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.440545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.440574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.440587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.440598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.440631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.450465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.450627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.450654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.450667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.450678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.450703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.460526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.460652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.460704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.460717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.460728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.460754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.470737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.470878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.470904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.470916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.470927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.470953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.480577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.480709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.480733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.480746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.480818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.480844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.490597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.490721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.490745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.490757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.490768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.490793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.500677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.500823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.500848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.500860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.500871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.500896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.510854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.511014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.511040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.511052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.511063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.511091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.520726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.520858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.520883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.520895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.520906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.520931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.530710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.530853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.530878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.530890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.530901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.530925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.540793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.540915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.540940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.540953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.540964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.540989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.551036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.551246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.551273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.551286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.551297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.551324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.560833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.560969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.599 [2024-07-24 19:07:06.560993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.599 [2024-07-24 19:07:06.561005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.599 [2024-07-24 19:07:06.561016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.599 [2024-07-24 19:07:06.561041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.599 qpair failed and we were unable to recover it.
00:30:21.599 [2024-07-24 19:07:06.570811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.599 [2024-07-24 19:07:06.570918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.600 [2024-07-24 19:07:06.570942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.600 [2024-07-24 19:07:06.570959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.600 [2024-07-24 19:07:06.570970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.600 [2024-07-24 19:07:06.570995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.600 qpair failed and we were unable to recover it.
00:30:21.600 [2024-07-24 19:07:06.580896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.600 [2024-07-24 19:07:06.581029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.600 [2024-07-24 19:07:06.581054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.600 [2024-07-24 19:07:06.581066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.600 [2024-07-24 19:07:06.581077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.600 [2024-07-24 19:07:06.581102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.600 qpair failed and we were unable to recover it.
00:30:21.600 [2024-07-24 19:07:06.591017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.600 [2024-07-24 19:07:06.591153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.600 [2024-07-24 19:07:06.591178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.600 [2024-07-24 19:07:06.591191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.600 [2024-07-24 19:07:06.591201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.600 [2024-07-24 19:07:06.591227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.600 qpair failed and we were unable to recover it.
00:30:21.600 [2024-07-24 19:07:06.600927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.600 [2024-07-24 19:07:06.601049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.600 [2024-07-24 19:07:06.601074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.600 [2024-07-24 19:07:06.601086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.600 [2024-07-24 19:07:06.601097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.600 [2024-07-24 19:07:06.601122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.600 qpair failed and we were unable to recover it.
00:30:21.859 [2024-07-24 19:07:06.610980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.859 [2024-07-24 19:07:06.611140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.859 [2024-07-24 19:07:06.611166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.859 [2024-07-24 19:07:06.611178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.611189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.611216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.620905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.621018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.621043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.621056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.621066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.621091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.631156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.631299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.631324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.631337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.631347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.631372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.641066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.641254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.641279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.641292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.641303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.641329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.651015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.651161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.651187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.651199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.651209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.651235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.661056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.661202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.661227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.661245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.661256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.661281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.671290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.671434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.671460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.671472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.671482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.671507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.681186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.681311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.681335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.681347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.681358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.681382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.691175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.691289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.691313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.691325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.691336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.691360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.701222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.701336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.701362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.701375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.701389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.701415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.711498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.711668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.711694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.711706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.711717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.711743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.721341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.721453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.721479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.860 [2024-07-24 19:07:06.721491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.860 [2024-07-24 19:07:06.721502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.860 [2024-07-24 19:07:06.721527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.860 qpair failed and we were unable to recover it.
00:30:21.860 [2024-07-24 19:07:06.731389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.860 [2024-07-24 19:07:06.731506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.860 [2024-07-24 19:07:06.731531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.861 [2024-07-24 19:07:06.731544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.861 [2024-07-24 19:07:06.731554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.861 [2024-07-24 19:07:06.731579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.861 qpair failed and we were unable to recover it.
00:30:21.861 [2024-07-24 19:07:06.741388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.861 [2024-07-24 19:07:06.741579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.861 [2024-07-24 19:07:06.741613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.861 [2024-07-24 19:07:06.741626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.861 [2024-07-24 19:07:06.741637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.861 [2024-07-24 19:07:06.741663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.861 qpair failed and we were unable to recover it.
00:30:21.861 [2024-07-24 19:07:06.751664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.861 [2024-07-24 19:07:06.751801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.861 [2024-07-24 19:07:06.751831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.861 [2024-07-24 19:07:06.751844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.861 [2024-07-24 19:07:06.751854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.861 [2024-07-24 19:07:06.751880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.861 qpair failed and we were unable to recover it.
00:30:21.861 [2024-07-24 19:07:06.761419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.861 [2024-07-24 19:07:06.761544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.861 [2024-07-24 19:07:06.761568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.861 [2024-07-24 19:07:06.761580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.861 [2024-07-24 19:07:06.761591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.861 [2024-07-24 19:07:06.761625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.861 qpair failed and we were unable to recover it.
00:30:21.861 [2024-07-24 19:07:06.771509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.861 [2024-07-24 19:07:06.771631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.861 [2024-07-24 19:07:06.771655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.861 [2024-07-24 19:07:06.771668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.861 [2024-07-24 19:07:06.771679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:21.861 [2024-07-24 19:07:06.771704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:21.861 qpair failed and we were unable to recover it.
00:30:21.861 [2024-07-24 19:07:06.781521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.781641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.781666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.781678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.781689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.781714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 00:30:21.861 [2024-07-24 19:07:06.791764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.791906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.791931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.791943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.791954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.791984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 00:30:21.861 [2024-07-24 19:07:06.801532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.801661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.801686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.801699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.801709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.801735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 
00:30:21.861 [2024-07-24 19:07:06.811572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.811724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.811749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.811761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.811772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.811798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 00:30:21.861 [2024-07-24 19:07:06.821587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.821711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.821735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.821748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.821758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.821783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 00:30:21.861 [2024-07-24 19:07:06.831914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.832121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.832155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.832167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.832179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.832204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 
00:30:21.861 [2024-07-24 19:07:06.841797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.841910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.841939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.841951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.841962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.841986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 00:30:21.861 [2024-07-24 19:07:06.851766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.861 [2024-07-24 19:07:06.851920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.861 [2024-07-24 19:07:06.851945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.861 [2024-07-24 19:07:06.851957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.861 [2024-07-24 19:07:06.851968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.861 [2024-07-24 19:07:06.851993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.861 qpair failed and we were unable to recover it. 00:30:21.861 [2024-07-24 19:07:06.861776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.862 [2024-07-24 19:07:06.861897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.862 [2024-07-24 19:07:06.861921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.862 [2024-07-24 19:07:06.861933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.862 [2024-07-24 19:07:06.861944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:21.862 [2024-07-24 19:07:06.861968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.862 qpair failed and we were unable to recover it. 
00:30:22.122 [2024-07-24 19:07:06.872061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.872244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.872268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.872281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.872292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.872317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.881866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.881980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.882006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.882017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.882033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.882058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.891904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.892022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.892046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.892058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.892069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.892094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 
00:30:22.122 [2024-07-24 19:07:06.901865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.902006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.902031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.902043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.902054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.902079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.912088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.912224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.912249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.912262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.912273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.912298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.921942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.922094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.922119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.922131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.922143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.922168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 
00:30:22.122 [2024-07-24 19:07:06.932034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.932194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.932219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.932231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.932242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.932267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.942034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.942161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.942186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.942198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.942208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.942234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.952267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.952433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.952458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.952471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.952481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.952507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 
00:30:22.122 [2024-07-24 19:07:06.962137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.962264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.962289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.962302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.962313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.962338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.972174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.972336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.972360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.972372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.122 [2024-07-24 19:07:06.972388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.122 [2024-07-24 19:07:06.972414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.122 qpair failed and we were unable to recover it. 00:30:22.122 [2024-07-24 19:07:06.982178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.122 [2024-07-24 19:07:06.982336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.122 [2024-07-24 19:07:06.982360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.122 [2024-07-24 19:07:06.982372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:06.982384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:06.982409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 
00:30:22.123 [2024-07-24 19:07:06.992415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:06.992551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:06.992575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:06.992588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:06.992599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:06.992634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.002328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.002472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.002499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.002511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.002523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.002548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.012328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.012437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.012463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.012476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.012487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.012513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 
00:30:22.123 [2024-07-24 19:07:07.022290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.022424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.022450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.022462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.022474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.022500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.032579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.032736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.032762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.032775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.032786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.032810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.042428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.042635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.042660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.042673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.042683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.042709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 
00:30:22.123 [2024-07-24 19:07:07.052455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.052600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.052633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.052644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.052655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.052680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.062461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.062597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.062631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.062648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.062660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.062686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.072689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.072859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.072884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.072897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.072908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.072934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 
00:30:22.123 [2024-07-24 19:07:07.082531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.082658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.082685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.082697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.082708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.082733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.092583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.092710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.092735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.092747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.092758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.092783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.123 qpair failed and we were unable to recover it. 00:30:22.123 [2024-07-24 19:07:07.102618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.123 [2024-07-24 19:07:07.102730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.123 [2024-07-24 19:07:07.102756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.123 [2024-07-24 19:07:07.102768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.123 [2024-07-24 19:07:07.102779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.123 [2024-07-24 19:07:07.102805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.124 qpair failed and we were unable to recover it. 
00:30:22.124 [2024-07-24 19:07:07.112861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.124 [2024-07-24 19:07:07.113043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.124 [2024-07-24 19:07:07.113067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.124 [2024-07-24 19:07:07.113079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.124 [2024-07-24 19:07:07.113091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.124 [2024-07-24 19:07:07.113117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.124 qpair failed and we were unable to recover it. 00:30:22.124 [2024-07-24 19:07:07.122679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.124 [2024-07-24 19:07:07.122798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.124 [2024-07-24 19:07:07.122823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.124 [2024-07-24 19:07:07.122835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.124 [2024-07-24 19:07:07.122846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.124 [2024-07-24 19:07:07.122871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.124 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.132673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.132787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.132814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.132826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.132838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.132863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 
00:30:22.384 [2024-07-24 19:07:07.142719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.142834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.142859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.142872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.142882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.142907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.152980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.153153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.153183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.153195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.153207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.153233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.162814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.162931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.162955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.162967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.162978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.163003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 
00:30:22.384 [2024-07-24 19:07:07.172790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.172900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.172924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.172937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.172947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.172973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.182851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.182995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.183020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.183032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.183043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.183068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.193083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.193217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.193243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.193255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.193266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.193295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 
00:30:22.384 [2024-07-24 19:07:07.202901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.203018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.203042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.203054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.203065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.203091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.212970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.213089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.213114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.213127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.213137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.213163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.222991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.223100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.223125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.223137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.223148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.223172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 
00:30:22.384 [2024-07-24 19:07:07.233211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.384 [2024-07-24 19:07:07.233348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.384 [2024-07-24 19:07:07.233372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.384 [2024-07-24 19:07:07.233385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.384 [2024-07-24 19:07:07.233395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.384 [2024-07-24 19:07:07.233420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.384 qpair failed and we were unable to recover it. 00:30:22.384 [2024-07-24 19:07:07.243050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.385 [2024-07-24 19:07:07.243171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.385 [2024-07-24 19:07:07.243201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.385 [2024-07-24 19:07:07.243213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.385 [2024-07-24 19:07:07.243224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.385 [2024-07-24 19:07:07.243248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.385 qpair failed and we were unable to recover it. 00:30:22.385 [2024-07-24 19:07:07.253094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.385 [2024-07-24 19:07:07.253235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.385 [2024-07-24 19:07:07.253260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.385 [2024-07-24 19:07:07.253273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.385 [2024-07-24 19:07:07.253284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.385 [2024-07-24 19:07:07.253309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.385 qpair failed and we were unable to recover it. 
00:30:22.385 [2024-07-24 19:07:07.263121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.385 [2024-07-24 19:07:07.263240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.385 [2024-07-24 19:07:07.263264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.385 [2024-07-24 19:07:07.263276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.385 [2024-07-24 19:07:07.263287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.385 [2024-07-24 19:07:07.263312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.385 qpair failed and we were unable to recover it. 00:30:22.385 [2024-07-24 19:07:07.273342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.385 [2024-07-24 19:07:07.273484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.385 [2024-07-24 19:07:07.273509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.385 [2024-07-24 19:07:07.273520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.385 [2024-07-24 19:07:07.273532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.385 [2024-07-24 19:07:07.273557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.385 qpair failed and we were unable to recover it. 00:30:22.385 [2024-07-24 19:07:07.283182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.385 [2024-07-24 19:07:07.283292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.385 [2024-07-24 19:07:07.283318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.385 [2024-07-24 19:07:07.283330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.385 [2024-07-24 19:07:07.283346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:22.385 [2024-07-24 19:07:07.283370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.385 qpair failed and we were unable to recover it. 
00:30:22.385 [2024-07-24 19:07:07.293204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.385 [2024-07-24 19:07:07.293343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.385 [2024-07-24 19:07:07.293368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.385 [2024-07-24 19:07:07.293381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.385 [2024-07-24 19:07:07.293392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:22.385 [2024-07-24 19:07:07.293416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:22.385 qpair failed and we were unable to recover it.
[the same seven-line CONNECT failure sequence repeats for 68 further qpair attempts, [2024-07-24 19:07:07.303298] through [2024-07-24 19:07:07.975615], each ending with "qpair failed and we were unable to recover it."; only the microsecond timestamps differ between repetitions]
00:30:23.168 [2024-07-24 19:07:07.985464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:07.985613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:07.985637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:07.985649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:07.985660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:07.985687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 00:30:23.168 [2024-07-24 19:07:07.995738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:07.995876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:07.995901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:07.995913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:07.995924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:07.995948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 00:30:23.168 [2024-07-24 19:07:08.005556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:08.005707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:08.005734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:08.005747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:08.005758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:08.005784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 
00:30:23.168 [2024-07-24 19:07:08.015500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:08.015626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:08.015652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:08.015664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:08.015675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:08.015700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 00:30:23.168 [2024-07-24 19:07:08.025619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:08.025757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:08.025782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:08.025794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:08.025806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:08.025832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 00:30:23.168 [2024-07-24 19:07:08.035838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:08.035980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:08.036004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:08.036016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:08.036027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:08.036052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 
00:30:23.168 [2024-07-24 19:07:08.045696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:08.045848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:08.045877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.168 [2024-07-24 19:07:08.045889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.168 [2024-07-24 19:07:08.045900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.168 [2024-07-24 19:07:08.045926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.168 qpair failed and we were unable to recover it. 00:30:23.168 [2024-07-24 19:07:08.055689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.168 [2024-07-24 19:07:08.055806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.168 [2024-07-24 19:07:08.055831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.055844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.055855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.055880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.065675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.065837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.065864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.065876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.065887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.065913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 
00:30:23.169 [2024-07-24 19:07:08.075983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.076124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.076151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.076164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.076175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.076200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.085809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.085927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.085952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.085964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.085975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.086005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.095831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.095967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.095991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.096004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.096015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.096040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 
00:30:23.169 [2024-07-24 19:07:08.105837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.105984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.106010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.106024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.106034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.106061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.116162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.116299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.116324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.116337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.116347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.116373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.125908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.126021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.126046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.126059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.126070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.126095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 
00:30:23.169 [2024-07-24 19:07:08.135994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.136127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.136158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.136172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.136183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.136207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.145992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.146107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.146132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.146145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.146156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.146182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.169 [2024-07-24 19:07:08.156178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.156347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.156374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.156387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.156398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.156423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 
00:30:23.169 [2024-07-24 19:07:08.166034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.169 [2024-07-24 19:07:08.166146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.169 [2024-07-24 19:07:08.166172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.169 [2024-07-24 19:07:08.166185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.169 [2024-07-24 19:07:08.166196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.169 [2024-07-24 19:07:08.166221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.169 qpair failed and we were unable to recover it. 00:30:23.429 [2024-07-24 19:07:08.176114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.176264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.176288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.176299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.176317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.176342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 00:30:23.429 [2024-07-24 19:07:08.186144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.186268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.186292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.186304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.186315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.186340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 
00:30:23.429 [2024-07-24 19:07:08.196309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.196451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.196475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.196487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.196498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.196523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 00:30:23.429 [2024-07-24 19:07:08.206198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.206318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.206343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.206356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.206366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.206391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 00:30:23.429 [2024-07-24 19:07:08.216171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.216286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.216311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.216324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.216335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.216360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 
00:30:23.429 [2024-07-24 19:07:08.226270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.226387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.226412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.226425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.226436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.226461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 00:30:23.429 [2024-07-24 19:07:08.236517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.236660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.236685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.236697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.236708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.236733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 00:30:23.429 [2024-07-24 19:07:08.246355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.429 [2024-07-24 19:07:08.246532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.429 [2024-07-24 19:07:08.246557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.429 [2024-07-24 19:07:08.246569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.429 [2024-07-24 19:07:08.246580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.429 [2024-07-24 19:07:08.246613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.429 qpair failed and we were unable to recover it. 
00:30:23.430 [2024-07-24 19:07:08.256391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.256535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.256560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.256573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.256583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.256617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.266358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.266473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.266498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.266515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.266527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.266552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.276632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.276776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.276801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.276813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.276824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.276850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 
00:30:23.430 [2024-07-24 19:07:08.286544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.286673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.286698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.286710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.286721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.286746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.296466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.296615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.296640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.296652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.296663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.296688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.306479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.306615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.306641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.306653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.306664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.306690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 
00:30:23.430 [2024-07-24 19:07:08.316722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.316875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.316899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.316911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.316922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.316948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.326613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.326725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.326750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.326763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.326774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.326800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.336608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.336767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.336792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.336804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.336815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.336840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 
00:30:23.430 [2024-07-24 19:07:08.346634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.346753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.346779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.346792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.346802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.346828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.356854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.356997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.357023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.357040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.357051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.357076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 00:30:23.430 [2024-07-24 19:07:08.366777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.366908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.366933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.366945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.366956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.366981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.430 qpair failed and we were unable to recover it. 
00:30:23.430 [2024-07-24 19:07:08.376759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.430 [2024-07-24 19:07:08.376938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.430 [2024-07-24 19:07:08.376963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.430 [2024-07-24 19:07:08.376975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.430 [2024-07-24 19:07:08.376987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.430 [2024-07-24 19:07:08.377012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.431 qpair failed and we were unable to recover it. 00:30:23.431 [2024-07-24 19:07:08.386825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.431 [2024-07-24 19:07:08.386986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.431 [2024-07-24 19:07:08.387010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.431 [2024-07-24 19:07:08.387022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.431 [2024-07-24 19:07:08.387033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.431 [2024-07-24 19:07:08.387058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.431 qpair failed and we were unable to recover it. 00:30:23.431 [2024-07-24 19:07:08.396967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.431 [2024-07-24 19:07:08.397108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.431 [2024-07-24 19:07:08.397133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.431 [2024-07-24 19:07:08.397146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.431 [2024-07-24 19:07:08.397157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.431 [2024-07-24 19:07:08.397182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.431 qpair failed and we were unable to recover it. 
00:30:23.431 [2024-07-24 19:07:08.406808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.431 [2024-07-24 19:07:08.406959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.431 [2024-07-24 19:07:08.406984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.431 [2024-07-24 19:07:08.406997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.431 [2024-07-24 19:07:08.407007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.431 [2024-07-24 19:07:08.407032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.431 qpair failed and we were unable to recover it. 00:30:23.431 [2024-07-24 19:07:08.416913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.431 [2024-07-24 19:07:08.417075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.431 [2024-07-24 19:07:08.417100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.431 [2024-07-24 19:07:08.417112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.431 [2024-07-24 19:07:08.417123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.431 [2024-07-24 19:07:08.417149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.431 qpair failed and we were unable to recover it. 00:30:23.431 [2024-07-24 19:07:08.426920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.431 [2024-07-24 19:07:08.427064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.431 [2024-07-24 19:07:08.427088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.431 [2024-07-24 19:07:08.427100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.431 [2024-07-24 19:07:08.427111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.431 [2024-07-24 19:07:08.427136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.431 qpair failed and we were unable to recover it. 
00:30:23.691 [2024-07-24 19:07:08.437231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.691 [2024-07-24 19:07:08.437410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.691 [2024-07-24 19:07:08.437435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.691 [2024-07-24 19:07:08.437448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.691 [2024-07-24 19:07:08.437459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.691 [2024-07-24 19:07:08.437484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.691 qpair failed and we were unable to recover it. 00:30:23.691 [2024-07-24 19:07:08.446989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.691 [2024-07-24 19:07:08.447104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.691 [2024-07-24 19:07:08.447136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.691 [2024-07-24 19:07:08.447148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.691 [2024-07-24 19:07:08.447159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.691 [2024-07-24 19:07:08.447184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.691 qpair failed and we were unable to recover it. 00:30:23.691 [2024-07-24 19:07:08.456976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.691 [2024-07-24 19:07:08.457159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.691 [2024-07-24 19:07:08.457185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.691 [2024-07-24 19:07:08.457198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.691 [2024-07-24 19:07:08.457209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.691 [2024-07-24 19:07:08.457234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.691 qpair failed and we were unable to recover it. 
00:30:23.691 [2024-07-24 19:07:08.467064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.691 [2024-07-24 19:07:08.467187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.691 [2024-07-24 19:07:08.467212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.691 [2024-07-24 19:07:08.467224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.691 [2024-07-24 19:07:08.467234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.691 [2024-07-24 19:07:08.467259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.691 qpair failed and we were unable to recover it. 00:30:23.691 [2024-07-24 19:07:08.477353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.691 [2024-07-24 19:07:08.477491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.691 [2024-07-24 19:07:08.477516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.691 [2024-07-24 19:07:08.477528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.691 [2024-07-24 19:07:08.477539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.691 [2024-07-24 19:07:08.477564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.691 qpair failed and we were unable to recover it. 00:30:23.691 [2024-07-24 19:07:08.487169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.691 [2024-07-24 19:07:08.487315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.691 [2024-07-24 19:07:08.487339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.691 [2024-07-24 19:07:08.487352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.691 [2024-07-24 19:07:08.487363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:23.691 [2024-07-24 19:07:08.487393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:23.691 qpair failed and we were unable to recover it. 
00:30:23.691 [2024-07-24 19:07:08.497184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.691 [2024-07-24 19:07:08.497306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.691 [2024-07-24 19:07:08.497330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.691 [2024-07-24 19:07:08.497342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.691 [2024-07-24 19:07:08.497353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.691 [2024-07-24 19:07:08.497378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.691 qpair failed and we were unable to recover it.
00:30:23.691 [2024-07-24 19:07:08.507220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.691 [2024-07-24 19:07:08.507375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.691 [2024-07-24 19:07:08.507401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.691 [2024-07-24 19:07:08.507413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.691 [2024-07-24 19:07:08.507424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.691 [2024-07-24 19:07:08.507451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.691 qpair failed and we were unable to recover it.
00:30:23.691 [2024-07-24 19:07:08.517444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.517589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.517620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.517633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.517644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.517670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.527261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.527385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.527410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.527423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.527433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.527458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.537281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.537396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.537425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.537437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.537448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.537473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.547322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.547448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.547473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.547485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.547496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.547521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.557597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.557783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.557808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.557820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.557831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.557857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.567392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.567517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.567541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.567554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.567565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.567590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.577469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.577579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.577609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.577622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.577638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.577664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.587478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.587592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.587623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.587636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.587647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.587672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.597736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.597876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.597900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.597912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.597923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.597947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.607520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.607636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.607661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.607673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.607683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.607708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.617585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.617699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.617725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.617738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.617749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.617774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.627592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.627749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.627774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.627786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.627796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.627822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.637768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.637966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.692 [2024-07-24 19:07:08.637990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.692 [2024-07-24 19:07:08.638002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.692 [2024-07-24 19:07:08.638014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.692 [2024-07-24 19:07:08.638039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.692 qpair failed and we were unable to recover it.
00:30:23.692 [2024-07-24 19:07:08.647689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.692 [2024-07-24 19:07:08.647840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.693 [2024-07-24 19:07:08.647865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.693 [2024-07-24 19:07:08.647877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.693 [2024-07-24 19:07:08.647888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.693 [2024-07-24 19:07:08.647913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.693 qpair failed and we were unable to recover it.
00:30:23.693 [2024-07-24 19:07:08.657741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.693 [2024-07-24 19:07:08.657883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.693 [2024-07-24 19:07:08.657907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.693 [2024-07-24 19:07:08.657920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.693 [2024-07-24 19:07:08.657931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.693 [2024-07-24 19:07:08.657956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.693 qpair failed and we were unable to recover it.
00:30:23.693 [2024-07-24 19:07:08.667737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.693 [2024-07-24 19:07:08.667920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.693 [2024-07-24 19:07:08.667944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.693 [2024-07-24 19:07:08.667956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.693 [2024-07-24 19:07:08.667972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.693 [2024-07-24 19:07:08.667998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.693 qpair failed and we were unable to recover it.
00:30:23.693 [2024-07-24 19:07:08.677958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.693 [2024-07-24 19:07:08.678103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.693 [2024-07-24 19:07:08.678128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.693 [2024-07-24 19:07:08.678139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.693 [2024-07-24 19:07:08.678151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.693 [2024-07-24 19:07:08.678174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.693 qpair failed and we were unable to recover it.
00:30:23.693 [2024-07-24 19:07:08.687845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.693 [2024-07-24 19:07:08.688015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.693 [2024-07-24 19:07:08.688039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.693 [2024-07-24 19:07:08.688051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.693 [2024-07-24 19:07:08.688062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.693 [2024-07-24 19:07:08.688086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.693 qpair failed and we were unable to recover it.
00:30:23.693 [2024-07-24 19:07:08.697843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.693 [2024-07-24 19:07:08.697996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.693 [2024-07-24 19:07:08.698020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.693 [2024-07-24 19:07:08.698032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.693 [2024-07-24 19:07:08.698043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.693 [2024-07-24 19:07:08.698069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.693 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.707842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.953 [2024-07-24 19:07:08.707947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.953 [2024-07-24 19:07:08.707973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.953 [2024-07-24 19:07:08.707985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.953 [2024-07-24 19:07:08.707997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.953 [2024-07-24 19:07:08.708022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.953 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.718139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.953 [2024-07-24 19:07:08.718311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.953 [2024-07-24 19:07:08.718335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.953 [2024-07-24 19:07:08.718347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.953 [2024-07-24 19:07:08.718358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.953 [2024-07-24 19:07:08.718384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.953 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.727951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.953 [2024-07-24 19:07:08.728096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.953 [2024-07-24 19:07:08.728120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.953 [2024-07-24 19:07:08.728133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.953 [2024-07-24 19:07:08.728144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.953 [2024-07-24 19:07:08.728169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.953 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.738033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.953 [2024-07-24 19:07:08.738151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.953 [2024-07-24 19:07:08.738175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.953 [2024-07-24 19:07:08.738188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.953 [2024-07-24 19:07:08.738198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.953 [2024-07-24 19:07:08.738223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.953 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.748020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.953 [2024-07-24 19:07:08.748177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.953 [2024-07-24 19:07:08.748202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.953 [2024-07-24 19:07:08.748213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.953 [2024-07-24 19:07:08.748224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.953 [2024-07-24 19:07:08.748250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.953 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.758256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.953 [2024-07-24 19:07:08.758406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.953 [2024-07-24 19:07:08.758431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.953 [2024-07-24 19:07:08.758448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.953 [2024-07-24 19:07:08.758460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.953 [2024-07-24 19:07:08.758485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.953 qpair failed and we were unable to recover it.
00:30:23.953 [2024-07-24 19:07:08.768147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.768273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.768298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.768310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.768320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.768345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.778164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.778274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.778299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.778311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.778321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.778346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.788145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.788292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.788316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.788329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.788340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.788365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.798345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.798481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.798506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.798518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.798529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.798554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.808229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.808374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.808400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.808412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.808424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.808448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.818253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.818391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.818416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.818428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.818439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.818465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.828270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.828418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.828444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.828456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.828467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.828493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.838497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.838652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.838677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.838690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.838700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.838726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.848375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.848501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.848530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.848542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.848553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.848578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.858358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.858472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.858497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.858509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.858520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.858545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.868418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.868526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.868551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.868563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.868574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.868599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.878723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.878887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.954 [2024-07-24 19:07:08.878913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.954 [2024-07-24 19:07:08.878926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.954 [2024-07-24 19:07:08.878936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.954 [2024-07-24 19:07:08.878962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.954 qpair failed and we were unable to recover it.
00:30:23.954 [2024-07-24 19:07:08.888510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.954 [2024-07-24 19:07:08.888635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.888659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.888672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.888682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.888712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.898558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.898686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.898711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.898723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.898734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.898759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.908623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.908735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.908762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.908774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.908785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.908811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.918825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.918966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.918992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.919004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.919015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.919040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.928629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.928743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.928768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.928780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.928791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.928816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.938610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.938724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.938754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.938766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.938777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.938802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.948679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.948827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.948852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.948864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.948875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.948901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:23.955 [2024-07-24 19:07:08.958940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.955 [2024-07-24 19:07:08.959091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.955 [2024-07-24 19:07:08.959116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.955 [2024-07-24 19:07:08.959128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.955 [2024-07-24 19:07:08.959139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:23.955 [2024-07-24 19:07:08.959165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.955 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:08.968759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:08.968871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:08.968896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:08.968908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:08.968919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:08.968945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:08.978850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:08.978996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:08.979020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:08.979032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:08.979048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:08.979073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:08.988902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:08.989069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:08.989093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:08.989105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:08.989116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:08.989140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:08.999089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:08.999260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:08.999285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:08.999296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:08.999307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:08.999333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:09.008918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:09.009084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:09.009110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:09.009122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:09.009133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:09.009158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:09.018908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:09.019031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:09.019055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:09.019067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:09.019078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:09.019103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:09.028970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:09.029085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:09.029111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:09.029124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:09.029135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:09.029160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:09.039221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:09.039420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:09.039447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:09.039460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:09.039471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:09.039497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:09.049074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.215 [2024-07-24 19:07:09.049199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.215 [2024-07-24 19:07:09.049224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.215 [2024-07-24 19:07:09.049237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.215 [2024-07-24 19:07:09.049248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.215 [2024-07-24 19:07:09.049272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.215 qpair failed and we were unable to recover it.
00:30:24.215 [2024-07-24 19:07:09.059114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.216 [2024-07-24 19:07:09.059257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.216 [2024-07-24 19:07:09.059282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.216 [2024-07-24 19:07:09.059294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.216 [2024-07-24 19:07:09.059306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90
00:30:24.216 [2024-07-24 19:07:09.059330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.216 qpair failed and we were unable to recover it.
00:30:24.216 [2024-07-24 19:07:09.069081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.069191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.069216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.069228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.069244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.069268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.079300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.079462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.079487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.079499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.079510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.079536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.089123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.089239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.089263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.089275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.089285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.089310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 
00:30:24.216 [2024-07-24 19:07:09.099152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.099298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.099323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.099334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.099345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.099371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.109188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.109296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.109322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.109334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.109345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.109370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.119425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.119575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.119600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.119620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.119631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.119655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 
00:30:24.216 [2024-07-24 19:07:09.129252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.129368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.129393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.129405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.129416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.129441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.139291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.139405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.139432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.139445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.139456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.139481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.149314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.149454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.149480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.149493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.149504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.149529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 
00:30:24.216 [2024-07-24 19:07:09.159578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.159733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.159761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.159779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.159791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.159817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.169426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.169572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.169598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.169621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.169632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.169657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 00:30:24.216 [2024-07-24 19:07:09.179395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.179518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.216 [2024-07-24 19:07:09.179543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.216 [2024-07-24 19:07:09.179556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.216 [2024-07-24 19:07:09.179566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.216 [2024-07-24 19:07:09.179592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.216 qpair failed and we were unable to recover it. 
00:30:24.216 [2024-07-24 19:07:09.189518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.216 [2024-07-24 19:07:09.189642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.217 [2024-07-24 19:07:09.189667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.217 [2024-07-24 19:07:09.189679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.217 [2024-07-24 19:07:09.189691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.217 [2024-07-24 19:07:09.189716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.217 qpair failed and we were unable to recover it. 00:30:24.217 [2024-07-24 19:07:09.199706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.217 [2024-07-24 19:07:09.199846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.217 [2024-07-24 19:07:09.199870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.217 [2024-07-24 19:07:09.199882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.217 [2024-07-24 19:07:09.199892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.217 [2024-07-24 19:07:09.199917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.217 qpair failed and we were unable to recover it. 00:30:24.217 [2024-07-24 19:07:09.209543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.217 [2024-07-24 19:07:09.209690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.217 [2024-07-24 19:07:09.209716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.217 [2024-07-24 19:07:09.209728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.217 [2024-07-24 19:07:09.209738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.217 [2024-07-24 19:07:09.209765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.217 qpair failed and we were unable to recover it. 
00:30:24.217 [2024-07-24 19:07:09.219573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.217 [2024-07-24 19:07:09.219689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.217 [2024-07-24 19:07:09.219714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.217 [2024-07-24 19:07:09.219727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.217 [2024-07-24 19:07:09.219737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.217 [2024-07-24 19:07:09.219763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.217 qpair failed and we were unable to recover it. 00:30:24.476 [2024-07-24 19:07:09.229662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.229774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.229799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.229811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.229822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.229847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 00:30:24.476 [2024-07-24 19:07:09.239887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.240030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.240054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.240066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.240077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.240103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 
00:30:24.476 [2024-07-24 19:07:09.249667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.249788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.249817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.249830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.249841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.249866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 00:30:24.476 [2024-07-24 19:07:09.259693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.259811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.259837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.259850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.259861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.259886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 00:30:24.476 [2024-07-24 19:07:09.269758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.269871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.269896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.269909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.269919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.269944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 
00:30:24.476 [2024-07-24 19:07:09.279973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.280145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.280168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.280180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.280191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.280216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 00:30:24.476 [2024-07-24 19:07:09.289758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.289872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.289897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.289908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.289920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.289949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 00:30:24.476 [2024-07-24 19:07:09.299839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.476 [2024-07-24 19:07:09.300026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.476 [2024-07-24 19:07:09.300050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.476 [2024-07-24 19:07:09.300063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.476 [2024-07-24 19:07:09.300074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.476 [2024-07-24 19:07:09.300100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.476 qpair failed and we were unable to recover it. 
00:30:24.476 [2024-07-24 19:07:09.309831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.309950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.309975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.309987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.309998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.310023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.320065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.320225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.320251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.320262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.320273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.320299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.329924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.330067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.330091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.330104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.330114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.330140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 
00:30:24.477 [2024-07-24 19:07:09.339939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.340095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.340124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.340136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.340147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.340172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.350009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.350122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.350147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.350159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.350170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.350195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.360202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.360338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.360363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.360375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.360386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.360412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 
00:30:24.477 [2024-07-24 19:07:09.370064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.370212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.370236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.370249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.370260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e8000b90 00:30:24.477 [2024-07-24 19:07:09.370284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.380075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.380249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.380307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.380333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.380352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e0000b90 00:30:24.477 [2024-07-24 19:07:09.380408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.390142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.477 [2024-07-24 19:07:09.390323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.477 [2024-07-24 19:07:09.390368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.477 [2024-07-24 19:07:09.390390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.477 [2024-07-24 19:07:09.390409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe5e0000b90 00:30:24.477 [2024-07-24 19:07:09.390451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.477 qpair failed and we were unable to recover it. 00:30:24.477 [2024-07-24 19:07:09.390639] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:24.477 A controller has encountered a failure and is being reset. 00:30:24.736 Controller properly reset. 
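For anyone triaging a storm like the one above, it is usually faster to summarize the records than to read them; a minimal sketch using standard grep/sort/uniq, assuming the console output has been saved to build.log (the filename is illustrative):

  # How many qpairs failed without recovery?
  grep -c 'qpair failed and we were unable to recover it' build.log
  # Which source sites reported the errors, and how often?
  grep -oE '[a-z_]+\.c: *[0-9]+' build.log | sort | uniq -c | sort -rn
  # Did the keep-alive failure actually end in a reset and re-init?
  grep -nE 'being reset|properly reset|Initializing NVMe Controllers' build.log

Counted this way, the storm collapses to a single pattern: every attempt died in the fabric CONNECT poll path (nvme_fabric.c / nvme_tcp.c) until the missed keep-alive forced the controller reset logged just above.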
00:30:24.736 Initializing NVMe Controllers 00:30:24.736 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:24.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:24.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:24.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:24.736 Initialization complete. Launching workers. 00:30:24.736 Starting thread on core 1 00:30:24.736 Starting thread on core 2 00:30:24.736 Starting thread on core 3 00:30:24.736 Starting thread on core 0 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:24.736 00:30:24.736 real 0m11.714s 00:30:24.736 user 0m21.833s 00:30:24.736 sys 0m4.261s 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.736 ************************************ 00:30:24.736 END TEST nvmf_target_disconnect_tc2 00:30:24.736 ************************************ 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.736 rmmod nvme_tcp 00:30:24.736 rmmod nvme_fabrics 00:30:24.736 rmmod nvme_keyring 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2692019 ']' 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2692019 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2692019 ']' 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2692019 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:30:24.736 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2692019 00:30:24.996 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:24.996 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:24.996 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2692019' 00:30:24.996 killing process with pid 2692019 00:30:24.996 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2692019 00:30:24.996 19:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2692019 00:30:25.254 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:25.254 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.254 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.254 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.255 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.255 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.255 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.255 19:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.159 19:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.159 00:30:27.159 real 0m20.490s 00:30:27.159 user 0m49.851s 00:30:27.159 sys 0m9.219s 00:30:27.159 19:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.159 19:07:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:27.159 ************************************ 00:30:27.159 END TEST nvmf_target_disconnect 00:30:27.159 ************************************ 00:30:27.418 19:07:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:27.418 00:30:27.418 real 6m14.702s 00:30:27.418 user 11m58.338s 00:30:27.418 sys 1m55.696s 00:30:27.418 19:07:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.418 19:07:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.418 ************************************ 00:30:27.418 END TEST nvmf_host 00:30:27.418 ************************************ 00:30:27.418 00:30:27.418 real 23m39.404s 00:30:27.418 user 51m23.283s 00:30:27.418 sys 6m55.839s 00:30:27.418 19:07:12 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.418 19:07:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.418 ************************************ 00:30:27.418 END TEST nvmf_tcp 00:30:27.418 ************************************ 00:30:27.418 19:07:12 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:27.418 19:07:12 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:27.418 19:07:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 
1 ']' 00:30:27.418 19:07:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.418 19:07:12 -- common/autotest_common.sh@10 -- # set +x 00:30:27.418 ************************************ 00:30:27.418 START TEST spdkcli_nvmf_tcp 00:30:27.418 ************************************ 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:27.418 * Looking for test storage... 00:30:27.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.418 19:07:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.419 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2693745 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2693745 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2693745 ']' 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.678 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.678 [2024-07-24 19:07:12.477287] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:30:27.678 [2024-07-24 19:07:12.477348] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693745 ] 00:30:27.678 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.678 [2024-07-24 19:07:12.558993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:27.678 [2024-07-24 19:07:12.651383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.678 [2024-07-24 19:07:12.651389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.937 19:07:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:27.937 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:27.937 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:27.937 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:27.937 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:27.937 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:27.937 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:27.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:27.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:27.937 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:27.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:27.937 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:27.937 ' 00:30:30.472 [2024-07-24 19:07:15.453085] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.847 [2024-07-24 19:07:16.737643] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:34.379 [2024-07-24 19:07:19.117463] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:36.280 [2024-07-24 19:07:21.180367] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:38.183 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:38.183 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:38.183 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:38.183 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:38.183 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:38.183 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:38.183 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:38.183 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:38.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:38.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:38.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:38.183 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.183 19:07:22 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:38.183 19:07:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:38.442 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.443 19:07:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:38.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:38.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:38.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:38.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:38.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:38.443 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:38.443 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:38.443 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:38.443 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:38.443 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:38.443 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:38.443 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:38.443 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:38.443 ' 00:30:45.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:45.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:45.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:45.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:45.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:45.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:45.015 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:30:45.015 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:45.015 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:45.015 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:45.015 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:45.016 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:45.016 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:45.016 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:45.016 19:07:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:45.016 19:07:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.016 19:07:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2693745 ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2693745' 00:30:45.016 killing process with pid 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2693745 ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2693745 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2693745 ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2693745 00:30:45.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2693745) - No such process 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2693745 is not found' 00:30:45.016 Process with pid 2693745 is not found 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:45.016 00:30:45.016 real 0m16.957s 00:30:45.016 user 0m37.110s 00:30:45.016 sys 0m0.923s 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:45.016 19:07:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.016 
************************************ 00:30:45.016 END TEST spdkcli_nvmf_tcp 00:30:45.016 ************************************ 00:30:45.016 19:07:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:45.016 19:07:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:45.016 19:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:45.016 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:30:45.016 ************************************ 00:30:45.016 START TEST nvmf_identify_passthru 00:30:45.016 ************************************ 00:30:45.016 19:07:29 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:45.016 * Looking for test storage... 00:30:45.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.016 19:07:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.016 19:07:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.016 19:07:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.016 19:07:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:45.016 19:07:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.016 19:07:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.016 19:07:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.016 19:07:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:45.016 19:07:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.016 19:07:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:45.016 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.017 19:07:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:45.017 19:07:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.017 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:45.017 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:45.017 19:07:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:45.017 19:07:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.293 19:07:34 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.293 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:50.294 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:50.294 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:50.294 Found net devices under 0000:af:00.0: cvl_0_0 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:50.294 Found net devices under 0000:af:00.1: cvl_0_1 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
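The nvmf_tcp_init trace that follows moves one of the two discovered E810 ports into a private network namespace so that the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, on the host side) can talk over a real link on one machine. Condensed from the commands in the trace below:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                 # sanity-check each direction
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1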
00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:50.294 19:07:34 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:50.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:30:50.294 00:30:50.294 --- 10.0.0.2 ping statistics --- 00:30:50.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.294 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:30:50.294 00:30:50.294 --- 10.0.0.1 ping statistics --- 00:30:50.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.294 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:50.294 19:07:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:50.294 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.294 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:86:00.0 00:30:50.294 19:07:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:86:00.0 00:30:50.294 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:30:50.294 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:30:50.294 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:50.294 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:50.295 19:07:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:50.295 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.489 
19:07:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:30:54.489 19:07:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:30:54.489 19:07:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:54.489 19:07:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:54.748 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2701475 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.006 19:07:43 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2701475 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2701475 ']' 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:59.006 19:07:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.006 [2024-07-24 19:07:43.821985] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:30:59.006 [2024-07-24 19:07:43.822096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.006 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.006 [2024-07-24 19:07:43.948665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:59.269 [2024-07-24 19:07:44.041909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.269 [2024-07-24 19:07:44.041952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
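The --wait-for-rpc flag on the nvmf_tgt command line above is what makes the passthru setup possible: the target comes up, listens on /var/tmp/spdk.sock, and holds before subsystem initialization so that startup-time configuration can be applied first, which is exactly what the nvmf_set_config / framework_start_init RPC pair below does. The same sequence by hand (rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # waitforlisten in the harness blocks here until /var/tmp/spdk.sock accepts RPCs.
  "$spdk/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
  "$spdk/scripts/rpc.py" framework_start_init                        # releases the held application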
00:30:59.269 [2024-07-24 19:07:44.041962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.269 [2024-07-24 19:07:44.041971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.269 [2024-07-24 19:07:44.041978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.269 [2024-07-24 19:07:44.042029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.269 [2024-07-24 19:07:44.042564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.269 [2024-07-24 19:07:44.042655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.269 [2024-07-24 19:07:44.042656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:59.834 19:07:44 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.834 INFO: Log level set to 20 00:30:59.834 INFO: Requests: 00:30:59.834 { 00:30:59.834 "jsonrpc": "2.0", 00:30:59.834 "method": "nvmf_set_config", 00:30:59.834 "id": 1, 00:30:59.834 "params": { 00:30:59.834 "admin_cmd_passthru": { 00:30:59.834 "identify_ctrlr": true 00:30:59.834 } 00:30:59.834 } 00:30:59.834 } 00:30:59.834 00:30:59.834 INFO: response: 00:30:59.834 { 00:30:59.834 "jsonrpc": "2.0", 00:30:59.834 "id": 1, 00:30:59.834 "result": true 00:30:59.834 } 00:30:59.834 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.834 19:07:44 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.834 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.834 INFO: Setting log level to 20 00:30:59.834 INFO: Setting log level to 20 00:30:59.834 INFO: Log level set to 20 00:30:59.834 INFO: Log level set to 20 00:30:59.834 INFO: Requests: 00:30:59.834 { 00:30:59.834 "jsonrpc": "2.0", 00:30:59.834 "method": "framework_start_init", 00:30:59.834 "id": 1 00:30:59.834 } 00:30:59.834 00:30:59.834 INFO: Requests: 00:30:59.834 { 00:30:59.834 "jsonrpc": "2.0", 00:30:59.834 "method": "framework_start_init", 00:30:59.834 "id": 1 00:30:59.834 } 00:30:59.834 00:30:59.834 [2024-07-24 19:07:44.840660] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:00.093 INFO: response: 00:31:00.093 { 00:31:00.093 "jsonrpc": "2.0", 00:31:00.093 "id": 1, 00:31:00.093 "result": true 00:31:00.093 } 00:31:00.093 00:31:00.093 INFO: response: 00:31:00.093 { 00:31:00.093 "jsonrpc": "2.0", 00:31:00.093 "id": 1, 00:31:00.093 "result": true 00:31:00.093 } 00:31:00.093 00:31:00.093 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.093 19:07:44 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:00.094 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.094 19:07:44 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:00.094 INFO: Setting log level to 40 00:31:00.094 INFO: Setting log level to 40 00:31:00.094 INFO: Setting log level to 40 00:31:00.094 [2024-07-24 19:07:44.854706] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.094 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.094 19:07:44 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:00.094 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:00.094 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:00.094 19:07:44 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:31:00.094 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.094 19:07:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 Nvme0n1 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 [2024-07-24 19:07:47.797339] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 [ 00:31:03.383 { 00:31:03.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:03.383 "subtype": "Discovery", 00:31:03.383 "listen_addresses": [], 00:31:03.383 "allow_any_host": true, 00:31:03.383 "hosts": [] 00:31:03.383 }, 00:31:03.383 { 00:31:03.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.383 "subtype": "NVMe", 00:31:03.383 "listen_addresses": [ 00:31:03.383 { 00:31:03.383 "trtype": "TCP", 00:31:03.383 "adrfam": "IPv4", 00:31:03.383 "traddr": "10.0.0.2", 00:31:03.383 "trsvcid": "4420" 00:31:03.383 } 00:31:03.383 ], 00:31:03.383 "allow_any_host": true, 00:31:03.383 "hosts": [], 00:31:03.383 "serial_number": 
"SPDK00000000000001", 00:31:03.383 "model_number": "SPDK bdev Controller", 00:31:03.383 "max_namespaces": 1, 00:31:03.383 "min_cntlid": 1, 00:31:03.383 "max_cntlid": 65519, 00:31:03.383 "namespaces": [ 00:31:03.383 { 00:31:03.383 "nsid": 1, 00:31:03.383 "bdev_name": "Nvme0n1", 00:31:03.383 "name": "Nvme0n1", 00:31:03.383 "nguid": "EF7BC0CCAE924B93BE9C34524C9327BA", 00:31:03.383 "uuid": "ef7bc0cc-ae92-4b93-be9c-34524c9327ba" 00:31:03.383 } 00:31:03.383 ] 00:31:03.383 } 00:31:03.383 ] 00:31:03.383 19:07:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:03.383 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:03.383 19:07:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:03.383 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.383 19:07:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:03.383 19:07:48 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:31:03.383 19:07:48 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:03.383 19:07:48 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.383 19:07:48 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:03.383 19:07:48 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.383 rmmod nvme_tcp 00:31:03.383 rmmod nvme_fabrics 00:31:03.383 rmmod nvme_keyring 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:03.383 19:07:48 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2701475 ']' 00:31:03.383 19:07:48 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2701475 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2701475 ']' 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2701475 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:03.383 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2701475 00:31:03.642 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:03.642 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:03.642 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2701475' 00:31:03.642 killing process with pid 2701475 00:31:03.642 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2701475 00:31:03.642 19:07:48 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2701475 00:31:05.017 19:07:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:05.018 19:07:49 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:05.018 19:07:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:05.018 19:07:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:05.018 19:07:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:05.018 19:07:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.018 19:07:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:05.018 19:07:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.550 19:07:52 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:07.550 00:31:07.550 real 0m22.704s 00:31:07.550 user 0m31.097s 00:31:07.550 sys 0m5.385s 00:31:07.550 19:07:52 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:07.550 19:07:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:07.550 ************************************ 00:31:07.550 END TEST nvmf_identify_passthru 00:31:07.550 ************************************ 00:31:07.550 19:07:52 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:07.550 19:07:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:07.550 19:07:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.550 19:07:52 -- common/autotest_common.sh@10 -- # set +x 00:31:07.550 ************************************ 00:31:07.550 START TEST nvmf_dif 00:31:07.550 ************************************ 00:31:07.550 19:07:52 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:07.550 * Looking for test storage... 
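The nvmf_identify_passthru run that just ended boils down to a short RPC lifecycle: attach the physical drive as a bdev, export it through a TCP subsystem, identify it over the fabric, and check that the passthru handler round-trips the real serial and model numbers (BTLJ916308MR1P0FGN / INTEL) rather than SPDK's synthetic ones. A condensed sketch of that sequence, with rpc standing in for the harness's rpc_cmd:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$spdk/scripts/rpc.py" "$@"; }
  rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Identify over TCP; with --passthru-identify-ctrlr the serial of the
  # backing 0000:86:00.0 drive should come back, not the SPDK bdev one.
  "$spdk/build/bin/spdk_nvme_identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | grep 'Serial Number:'
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1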
00:31:07.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:07.550 19:07:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.550 19:07:52 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.550 19:07:52 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.550 19:07:52 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.550 19:07:52 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.551 19:07:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.551 19:07:52 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.551 19:07:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.551 19:07:52 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:07.551 19:07:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:07.551 19:07:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:07.551 19:07:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:07.551 19:07:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:07.551 19:07:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:07.551 19:07:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.551 19:07:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:07.551 19:07:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:07.551 19:07:52 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:07.551 19:07:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:12.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:12.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
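The discovery loop running at this point resolves each kept PCI address to its kernel net device purely through sysfs, which is where the cvl_0_0 and cvl_0_1 names printed below come from. The core of it, stripped of the harness plumbing (the two addresses are the E810 ports found in this run):

  for pci in 0000:af:00.0 0000:af:00.1; do
      # Each PCI device exposes its bound netdevs under .../net/ in sysfs.
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net devices under $pci: ${path##*/}"
      done
  done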
00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.821 19:07:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:12.822 Found net devices under 0000:af:00.0: cvl_0_0 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:12.822 Found net devices under 0000:af:00.1: cvl_0_1 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.822 19:07:57 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.082 19:07:57 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:31:13.082 00:31:13.082 --- 10.0.0.2 ping statistics --- 00:31:13.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.082 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:31:13.082 00:31:13.082 --- 10.0.0.1 ping statistics --- 00:31:13.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.082 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:13.082 19:07:57 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:15.619 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:15.619 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:15.619 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:15.878 19:08:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:15.878 19:08:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2707303 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2707303 00:31:15.878 19:08:00 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2707303 ']' 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:15.878 19:08:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:15.878 [2024-07-24 19:08:00.850136] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:31:15.878 [2024-07-24 19:08:00.850200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.137 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.137 [2024-07-24 19:08:00.936883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.137 [2024-07-24 19:08:01.025873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.137 [2024-07-24 19:08:01.025915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.137 [2024-07-24 19:08:01.025926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.137 [2024-07-24 19:08:01.025935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.137 [2024-07-24 19:08:01.025942] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
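Note on the launch above: nvmfappstart backgrounds nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the RPC UNIX socket answers. A reduced sketch of that launch-and-poll pattern (binary and script paths assumed to match the workspace layout in this log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # core of waitforlisten: poll the RPC socket until the target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done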
00:31:16.137 [2024-07-24 19:08:01.025963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.073 19:08:02 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:17.073 19:08:02 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:17.073 19:08:02 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:17.073 19:08:02 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:17.073 19:08:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.073 19:08:02 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.073 19:08:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:17.073 19:08:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:17.073 19:08:02 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.073 19:08:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.332 [2024-07-24 19:08:02.083548] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.332 19:08:02 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.332 19:08:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:17.332 19:08:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:17.332 19:08:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.332 19:08:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.332 ************************************ 00:31:17.332 START TEST fio_dif_1_default 00:31:17.332 ************************************ 00:31:17.332 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:17.332 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:17.332 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:17.332 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.332 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:17.332 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:17.333 bdev_null0 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:17.333 [2024-07-24 19:08:02.155865] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:17.333 { 00:31:17.333 "params": { 00:31:17.333 "name": "Nvme$subsystem", 00:31:17.333 "trtype": "$TEST_TRANSPORT", 00:31:17.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.333 "adrfam": "ipv4", 00:31:17.333 "trsvcid": "$NVMF_PORT", 00:31:17.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.333 "hdgst": ${hdgst:-false}, 00:31:17.333 "ddgst": ${ddgst:-false} 00:31:17.333 }, 00:31:17.333 "method": "bdev_nvme_attach_controller" 00:31:17.333 } 00:31:17.333 EOF 00:31:17.333 )") 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:17.333 "params": { 00:31:17.333 "name": "Nvme0", 00:31:17.333 "trtype": "tcp", 00:31:17.333 "traddr": "10.0.0.2", 00:31:17.333 "adrfam": "ipv4", 00:31:17.333 "trsvcid": "4420", 00:31:17.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.333 "hdgst": false, 00:31:17.333 "ddgst": false 00:31:17.333 }, 00:31:17.333 "method": "bdev_nvme_attach_controller" 00:31:17.333 }' 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:17.333 19:08:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.941 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:17.941 fio-3.35 00:31:17.941 Starting 1 thread 00:31:17.941 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.148 00:31:30.148 filename0: (groupid=0, jobs=1): err= 0: pid=2707750: Wed Jul 24 19:08:13 2024 00:31:30.148 read: IOPS=189, BW=759KiB/s (778kB/s)(7600KiB/10009msec) 00:31:30.148 slat (nsec): min=9124, max=31782, avg=9469.56, stdev=1053.27 00:31:30.148 clat (usec): min=728, max=42387, avg=21044.15, stdev=20240.65 00:31:30.148 lat (usec): min=737, max=42413, avg=21053.61, stdev=20240.63 00:31:30.148 clat percentiles (usec): 00:31:30.148 | 1.00th=[ 734], 5.00th=[ 742], 10.00th=[ 742], 20.00th=[ 758], 00:31:30.148 | 30.00th=[ 766], 40.00th=[ 775], 50.00th=[41157], 60.00th=[41157], 00:31:30.148 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:30.148 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:30.148 | 99.99th=[42206] 00:31:30.148 bw ( KiB/s): min= 704, max= 768, per=99.83%, avg=758.40, stdev=23.45, samples=20 00:31:30.148 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:31:30.148 lat 
(usec) : 750=14.95%, 1000=34.95% 00:31:30.148 lat (msec) : 50=50.11% 00:31:30.148 cpu : usr=94.28%, sys=5.42%, ctx=9, majf=0, minf=218 00:31:30.148 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.148 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.148 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:30.148 00:31:30.148 Run status group 0 (all jobs): 00:31:30.148 READ: bw=759KiB/s (778kB/s), 759KiB/s-759KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10009-10009msec 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.148 00:31:30.148 real 0m11.390s 00:31:30.148 user 0m20.699s 00:31:30.148 sys 0m0.870s 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.148 19:08:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.148 ************************************ 00:31:30.148 END TEST fio_dif_1_default 00:31:30.148 ************************************ 00:31:30.148 19:08:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:30.148 19:08:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:30.148 19:08:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.148 19:08:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.148 ************************************ 00:31:30.148 START TEST fio_dif_1_multi_subsystems 00:31:30.148 ************************************ 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 bdev_null0 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 [2024-07-24 19:08:13.623461] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 bdev_null1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.149 { 00:31:30.149 "params": { 00:31:30.149 "name": "Nvme$subsystem", 00:31:30.149 "trtype": "$TEST_TRANSPORT", 00:31:30.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.149 "adrfam": "ipv4", 00:31:30.149 "trsvcid": "$NVMF_PORT", 00:31:30.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.149 "hdgst": ${hdgst:-false}, 00:31:30.149 "ddgst": ${ddgst:-false} 00:31:30.149 }, 00:31:30.149 "method": "bdev_nvme_attach_controller" 00:31:30.149 } 00:31:30.149 EOF 00:31:30.149 )") 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # shift 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.149 { 00:31:30.149 "params": { 00:31:30.149 "name": "Nvme$subsystem", 00:31:30.149 "trtype": "$TEST_TRANSPORT", 00:31:30.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.149 "adrfam": "ipv4", 00:31:30.149 "trsvcid": "$NVMF_PORT", 00:31:30.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.149 "hdgst": ${hdgst:-false}, 00:31:30.149 "ddgst": ${ddgst:-false} 00:31:30.149 }, 00:31:30.149 "method": "bdev_nvme_attach_controller" 00:31:30.149 } 00:31:30.149 EOF 00:31:30.149 )") 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
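Note on the config assembly above: each pass through the heredoc appends one attach stanza to config[], and jq merges them into the JSON printed next, one bdev_nvme_attach_controller entry per subsystem for fio's spdk_bdev engine. The same attachment can also be issued over RPC against any SPDK app that exposes the bdev RPCs; a sketch for the first subsystem, with the socket path assumed:

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0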
00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.149 "params": { 00:31:30.149 "name": "Nvme0", 00:31:30.149 "trtype": "tcp", 00:31:30.149 "traddr": "10.0.0.2", 00:31:30.149 "adrfam": "ipv4", 00:31:30.149 "trsvcid": "4420", 00:31:30.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.149 "hdgst": false, 00:31:30.149 "ddgst": false 00:31:30.149 }, 00:31:30.149 "method": "bdev_nvme_attach_controller" 00:31:30.149 },{ 00:31:30.149 "params": { 00:31:30.149 "name": "Nvme1", 00:31:30.149 "trtype": "tcp", 00:31:30.149 "traddr": "10.0.0.2", 00:31:30.149 "adrfam": "ipv4", 00:31:30.149 "trsvcid": "4420", 00:31:30.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.149 "hdgst": false, 00:31:30.149 "ddgst": false 00:31:30.149 }, 00:31:30.149 "method": "bdev_nvme_attach_controller" 00:31:30.149 }' 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:30.149 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:30.150 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.150 19:08:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.150 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.150 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.150 fio-3.35 00:31:30.150 Starting 2 threads 00:31:30.150 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.355 00:31:42.355 filename0: (groupid=0, jobs=1): err= 0: pid=2709978: Wed Jul 24 19:08:25 2024 00:31:42.355 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10014msec) 00:31:42.355 slat (nsec): min=9181, max=48314, avg=11662.76, stdev=3587.42 00:31:42.355 clat (usec): min=40872, max=42995, avg=41861.86, stdev=350.75 00:31:42.355 lat (usec): min=40882, max=43011, avg=41873.52, stdev=351.07 00:31:42.355 clat percentiles (usec): 00:31:42.355 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:31:42.355 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:42.355 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.355 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:42.355 | 99.99th=[43254] 
00:31:42.355 bw ( KiB/s): min= 352, max= 384, per=33.64%, avg=380.80, stdev= 9.85, samples=20 00:31:42.355 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:42.355 lat (msec) : 50=100.00% 00:31:42.355 cpu : usr=97.48%, sys=2.20%, ctx=12, majf=0, minf=173 00:31:42.355 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.355 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.355 filename1: (groupid=0, jobs=1): err= 0: pid=2709979: Wed Jul 24 19:08:25 2024 00:31:42.355 read: IOPS=187, BW=748KiB/s (766kB/s)(7504KiB/10029msec) 00:31:42.355 slat (nsec): min=9185, max=29559, avg=10693.09, stdev=2517.89 00:31:42.355 clat (usec): min=728, max=42937, avg=21352.03, stdev=20512.61 00:31:42.355 lat (usec): min=737, max=42950, avg=21362.72, stdev=20511.86 00:31:42.355 clat percentiles (usec): 00:31:42.355 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 750], 20.00th=[ 766], 00:31:42.355 | 30.00th=[ 775], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:31:42.355 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.355 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:42.355 | 99.99th=[42730] 00:31:42.355 bw ( KiB/s): min= 704, max= 768, per=66.22%, avg=748.80, stdev=30.09, samples=20 00:31:42.355 iops : min= 176, max= 192, avg=187.20, stdev= 7.52, samples=20 00:31:42.355 lat (usec) : 750=8.42%, 1000=40.83% 00:31:42.355 lat (msec) : 2=0.64%, 50=50.11% 00:31:42.355 cpu : usr=97.17%, sys=2.51%, ctx=12, majf=0, minf=94 00:31:42.355 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.355 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.355 00:31:42.355 Run status group 0 (all jobs): 00:31:42.355 READ: bw=1130KiB/s (1157kB/s), 382KiB/s-748KiB/s (391kB/s-766kB/s), io=11.1MiB (11.6MB), run=10014-10029msec 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 00:31:42.355 real 0m11.810s 00:31:42.355 user 0m31.499s 00:31:42.355 sys 0m0.865s 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 ************************************ 00:31:42.355 END TEST fio_dif_1_multi_subsystems 00:31:42.355 ************************************ 00:31:42.355 19:08:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:42.355 19:08:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:42.355 19:08:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 ************************************ 00:31:42.355 START TEST fio_dif_rand_params 00:31:42.355 ************************************ 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.355 19:08:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 bdev_null0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.355 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 [2024-07-24 19:08:25.500905] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.356 { 00:31:42.356 "params": { 00:31:42.356 "name": "Nvme$subsystem", 00:31:42.356 "trtype": "$TEST_TRANSPORT", 00:31:42.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.356 "adrfam": "ipv4", 00:31:42.356 "trsvcid": "$NVMF_PORT", 00:31:42.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.356 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:42.356 "hdgst": ${hdgst:-false}, 00:31:42.356 "ddgst": ${ddgst:-false} 00:31:42.356 }, 00:31:42.356 "method": "bdev_nvme_attach_controller" 00:31:42.356 } 00:31:42.356 EOF 00:31:42.356 )") 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:42.356 "params": { 00:31:42.356 "name": "Nvme0", 00:31:42.356 "trtype": "tcp", 00:31:42.356 "traddr": "10.0.0.2", 00:31:42.356 "adrfam": "ipv4", 00:31:42.356 "trsvcid": "4420", 00:31:42.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.356 "hdgst": false, 00:31:42.356 "ddgst": false 00:31:42.356 }, 00:31:42.356 "method": "bdev_nvme_attach_controller" 00:31:42.356 }' 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.356 19:08:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.356 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:42.356 ... 
00:31:42.356 fio-3.35 00:31:42.356 Starting 3 threads 00:31:42.356 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.630 00:31:47.630 filename0: (groupid=0, jobs=1): err= 0: pid=2712212: Wed Jul 24 19:08:31 2024 00:31:47.630 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(131MiB/5041msec) 00:31:47.630 slat (nsec): min=9553, max=47242, avg=22083.15, stdev=8953.15 00:31:47.630 clat (usec): min=5315, max=55944, avg=14385.48, stdev=11770.41 00:31:47.630 lat (usec): min=5325, max=55977, avg=14407.56, stdev=11770.93 00:31:47.630 clat percentiles (usec): 00:31:47.630 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 8848], 00:31:47.630 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12125], 00:31:47.630 | 70.00th=[13042], 80.00th=[14222], 90.00th=[17433], 95.00th=[51119], 00:31:47.630 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:31:47.630 | 99.99th=[55837] 00:31:47.630 bw ( KiB/s): min=17664, max=32256, per=35.87%, avg=26777.60, stdev=4159.86, samples=10 00:31:47.630 iops : min= 138, max= 252, avg=209.20, stdev=32.50, samples=10 00:31:47.630 lat (msec) : 10=40.80%, 20=50.62%, 50=1.62%, 100=6.96% 00:31:47.630 cpu : usr=94.13%, sys=4.11%, ctx=319, majf=0, minf=110 00:31:47.630 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:47.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.630 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:47.630 filename0: (groupid=0, jobs=1): err= 0: pid=2712213: Wed Jul 24 19:08:31 2024 00:31:47.630 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5044msec) 00:31:47.630 slat (nsec): min=9296, max=52088, avg=18594.69, stdev=8974.23 00:31:47.630 clat (usec): min=5710, max=95334, avg=15188.09, stdev=13119.66 00:31:47.630 lat (usec): min=5723, max=95351, avg=15206.69, stdev=13120.32 00:31:47.631 clat percentiles (usec): 00:31:47.631 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 8979], 00:31:47.631 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11338], 60.00th=[12387], 00:31:47.631 | 70.00th=[13304], 80.00th=[13960], 90.00th=[47973], 95.00th=[52691], 00:31:47.631 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897], 00:31:47.631 | 99.99th=[94897] 00:31:47.631 bw ( KiB/s): min=16896, max=32256, per=33.91%, avg=25318.40, stdev=4950.00, samples=10 00:31:47.631 iops : min= 132, max= 252, avg=197.80, stdev=38.67, samples=10 00:31:47.631 lat (msec) : 10=37.50%, 20=52.12%, 50=1.31%, 100=9.07% 00:31:47.631 cpu : usr=96.33%, sys=3.25%, ctx=6, majf=0, minf=73 00:31:47.631 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.631 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.631 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:47.631 filename0: (groupid=0, jobs=1): err= 0: pid=2712214: Wed Jul 24 19:08:31 2024 00:31:47.631 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(113MiB/5033msec) 00:31:47.631 slat (nsec): min=9274, max=73056, avg=17958.11, stdev=8861.82 00:31:47.631 clat (usec): min=5332, max=58340, avg=16733.91, stdev=14471.49 00:31:47.631 lat (usec): min=5343, max=58351, avg=16751.87, stdev=14472.14 00:31:47.631 clat percentiles (usec): 
00:31:47.631 | 1.00th=[ 6063], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 8979], 00:31:47.631 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11994], 60.00th=[13566], 00:31:47.631 | 70.00th=[14615], 80.00th=[15664], 90.00th=[51643], 95.00th=[55313], 00:31:47.631 | 99.00th=[57410], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:31:47.631 | 99.99th=[58459] 00:31:47.631 bw ( KiB/s): min= 9472, max=36096, per=30.80%, avg=22993.60, stdev=7507.01, samples=10 00:31:47.631 iops : min= 74, max= 282, avg=179.60, stdev=58.64, samples=10 00:31:47.631 lat (msec) : 10=32.19%, 20=55.16%, 50=1.33%, 100=11.32% 00:31:47.631 cpu : usr=96.62%, sys=2.96%, ctx=14, majf=0, minf=140 00:31:47.631 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.631 issued rwts: total=901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.631 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:47.631 00:31:47.631 Run status group 0 (all jobs): 00:31:47.631 READ: bw=72.9MiB/s (76.5MB/s), 22.4MiB/s-26.0MiB/s (23.5MB/s-27.3MB/s), io=368MiB (386MB), run=5033-5044msec 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
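Note: the create_subsystems 0 1 2 call entered above repeats the same four-RPC sequence once per subsystem, now with --dif-type 2, as traced next; condensed into a loop it is roughly:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed default socket
    for i in 0 1 2; do
        rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done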
00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 bdev_null0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 [2024-07-24 19:08:31.938284] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 bdev_null1 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 bdev_null2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:47.631 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:47.632 { 00:31:47.632 "params": { 00:31:47.632 "name": "Nvme$subsystem", 00:31:47.632 "trtype": "$TEST_TRANSPORT", 00:31:47.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.632 "adrfam": "ipv4", 00:31:47.632 "trsvcid": "$NVMF_PORT", 00:31:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.632 "hdgst": ${hdgst:-false}, 00:31:47.632 "ddgst": ${ddgst:-false} 00:31:47.632 }, 00:31:47.632 "method": "bdev_nvme_attach_controller" 00:31:47.632 } 00:31:47.632 EOF 00:31:47.632 )") 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:47.632 { 00:31:47.632 "params": { 00:31:47.632 "name": "Nvme$subsystem", 00:31:47.632 "trtype": "$TEST_TRANSPORT", 00:31:47.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.632 "adrfam": "ipv4", 00:31:47.632 "trsvcid": "$NVMF_PORT", 00:31:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.632 "hdgst": ${hdgst:-false}, 00:31:47.632 "ddgst": ${ddgst:-false} 00:31:47.632 }, 00:31:47.632 "method": "bdev_nvme_attach_controller" 00:31:47.632 } 00:31:47.632 EOF 00:31:47.632 )") 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:47.632 { 00:31:47.632 "params": { 00:31:47.632 "name": "Nvme$subsystem", 00:31:47.632 "trtype": "$TEST_TRANSPORT", 00:31:47.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.632 "adrfam": "ipv4", 00:31:47.632 "trsvcid": "$NVMF_PORT", 00:31:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.632 "hdgst": ${hdgst:-false}, 00:31:47.632 "ddgst": ${ddgst:-false} 00:31:47.632 }, 00:31:47.632 "method": "bdev_nvme_attach_controller" 00:31:47.632 } 00:31:47.632 EOF 00:31:47.632 )") 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:47.632 "params": { 00:31:47.632 "name": "Nvme0", 00:31:47.632 "trtype": "tcp", 00:31:47.632 "traddr": "10.0.0.2", 00:31:47.632 "adrfam": "ipv4", 00:31:47.632 "trsvcid": "4420", 00:31:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.632 "hdgst": false, 00:31:47.632 "ddgst": false 00:31:47.632 }, 00:31:47.632 "method": "bdev_nvme_attach_controller" 00:31:47.632 },{ 00:31:47.632 "params": { 00:31:47.632 "name": "Nvme1", 00:31:47.632 "trtype": "tcp", 00:31:47.632 "traddr": "10.0.0.2", 00:31:47.632 "adrfam": "ipv4", 00:31:47.632 "trsvcid": "4420", 00:31:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.632 "hdgst": false, 00:31:47.632 "ddgst": false 00:31:47.632 }, 00:31:47.632 "method": "bdev_nvme_attach_controller" 00:31:47.632 },{ 00:31:47.632 "params": { 00:31:47.632 "name": "Nvme2", 00:31:47.632 "trtype": "tcp", 00:31:47.632 "traddr": "10.0.0.2", 00:31:47.632 "adrfam": "ipv4", 00:31:47.632 "trsvcid": "4420", 00:31:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:47.632 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:47.632 "hdgst": false, 00:31:47.632 "ddgst": false 00:31:47.632 }, 00:31:47.632 "method": "bdev_nvme_attach_controller" 00:31:47.632 }' 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1343 -- # asan_lib= 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:47.632 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.632 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:47.632 ... 00:31:47.632 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:47.632 ... 00:31:47.632 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:47.632 ... 00:31:47.632 fio-3.35 00:31:47.632 Starting 24 threads 00:31:47.632 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.847 00:31:59.847 filename0: (groupid=0, jobs=1): err= 0: pid=2713404: Wed Jul 24 19:08:43 2024 00:31:59.847 read: IOPS=422, BW=1692KiB/s (1733kB/s)(16.6MiB/10024msec) 00:31:59.847 slat (nsec): min=9537, max=83198, avg=19303.07, stdev=8919.53 00:31:59.847 clat (usec): min=9175, max=40148, avg=37673.63, stdev=2611.42 00:31:59.847 lat (usec): min=9186, max=40166, avg=37692.94, stdev=2611.08 00:31:59.847 clat percentiles (usec): 00:31:59.847 | 1.00th=[27395], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.847 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.847 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.847 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:31:59.847 | 99.99th=[40109] 00:31:59.847 bw ( KiB/s): min= 1660, max= 1920, per=4.19%, avg=1688.95, stdev=67.55, samples=20 00:31:59.847 iops : min= 415, max= 480, avg=422.20, stdev=16.83, samples=20 00:31:59.847 lat (msec) : 10=0.38%, 20=0.38%, 50=99.25% 00:31:59.847 cpu : usr=98.91%, sys=0.70%, ctx=7, majf=0, minf=9 00:31:59.847 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.847 filename0: (groupid=0, jobs=1): err= 0: pid=2713405: Wed Jul 24 19:08:43 2024 00:31:59.847 read: IOPS=421, BW=1687KiB/s (1727kB/s)(16.5MiB/10017msec) 00:31:59.847 slat (nsec): min=6474, max=67553, avg=28974.84, stdev=10508.90 00:31:59.847 clat (usec): min=10204, max=47946, avg=37716.28, stdev=1938.03 00:31:59.847 lat (usec): min=10223, max=47964, avg=37745.25, stdev=1938.25 00:31:59.847 clat percentiles (usec): 00:31:59.847 | 1.00th=[29754], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.847 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.847 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.847 | 99.00th=[39060], 99.50th=[39584], 99.90th=[47973], 99.95th=[47973], 00:31:59.847 | 99.99th=[47973] 00:31:59.847 bw ( KiB/s): min= 1660, max= 1795, per=4.18%, avg=1682.15, stdev=47.81, samples=20 00:31:59.847 iops : min= 415, max= 448, avg=420.50, stdev=11.86, samples=20 00:31:59.847 lat (msec) : 20=0.38%, 50=99.62% 
00:31:59.847 cpu : usr=98.58%, sys=1.01%, ctx=20, majf=0, minf=9 00:31:59.847 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:59.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.847 filename0: (groupid=0, jobs=1): err= 0: pid=2713406: Wed Jul 24 19:08:43 2024 00:31:59.847 read: IOPS=418, BW=1675KiB/s (1716kB/s)(16.4MiB/10008msec) 00:31:59.847 slat (nsec): min=8439, max=91205, avg=34484.22, stdev=22299.51 00:31:59.847 clat (usec): min=21481, max=76112, avg=37933.28, stdev=3372.47 00:31:59.847 lat (usec): min=21501, max=76139, avg=37967.76, stdev=3371.68 00:31:59.847 clat percentiles (usec): 00:31:59.847 | 1.00th=[22676], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.847 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.847 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.847 | 99.00th=[53740], 99.50th=[59507], 99.90th=[61080], 99.95th=[61604], 00:31:59.847 | 99.99th=[76022] 00:31:59.847 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1669.68, stdev=51.97, samples=19 00:31:59.847 iops : min= 384, max= 448, avg=417.42, stdev=12.99, samples=19 00:31:59.847 lat (msec) : 50=97.95%, 100=2.05% 00:31:59.847 cpu : usr=98.29%, sys=1.27%, ctx=22, majf=0, minf=9 00:31:59.847 IO depths : 1=5.2%, 2=11.4%, 4=24.5%, 8=51.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:59.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.847 filename0: (groupid=0, jobs=1): err= 0: pid=2713407: Wed Jul 24 19:08:43 2024 00:31:59.847 read: IOPS=418, BW=1675KiB/s (1715kB/s)(16.4MiB/10009msec) 00:31:59.847 slat (nsec): min=6118, max=69380, avg=34843.52, stdev=9665.92 00:31:59.847 clat (usec): min=24938, max=72995, avg=37886.04, stdev=2328.10 00:31:59.847 lat (usec): min=24975, max=73009, avg=37920.88, stdev=2326.91 00:31:59.847 clat percentiles (usec): 00:31:59.847 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.847 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.847 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.847 | 99.00th=[39060], 99.50th=[40109], 99.90th=[72877], 99.95th=[72877], 00:31:59.847 | 99.99th=[72877] 00:31:59.847 bw ( KiB/s): min= 1536, max= 1795, per=4.14%, avg=1669.75, stdev=50.95, samples=20 00:31:59.847 iops : min= 384, max= 448, avg=417.40, stdev=12.64, samples=20 00:31:59.847 lat (msec) : 50=99.62%, 100=0.38% 00:31:59.847 cpu : usr=98.40%, sys=1.21%, ctx=18, majf=0, minf=9 00:31:59.847 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.847 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.847 filename0: (groupid=0, jobs=1): err= 0: pid=2713408: Wed Jul 24 19:08:43 2024 00:31:59.847 read: IOPS=419, BW=1679KiB/s 
(1720kB/s)(16.4MiB/10023msec) 00:31:59.847 slat (nsec): min=6271, max=68036, avg=32816.02, stdev=10372.41 00:31:59.847 clat (usec): min=25064, max=49486, avg=37844.67, stdev=1148.47 00:31:59.847 lat (usec): min=25087, max=49503, avg=37877.49, stdev=1147.59 00:31:59.847 clat percentiles (usec): 00:31:59.847 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.847 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.847 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.847 | 99.00th=[39060], 99.50th=[40109], 99.90th=[49546], 99.95th=[49546], 00:31:59.847 | 99.99th=[49546] 00:31:59.847 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1673.85, stdev=57.65, samples=20 00:31:59.847 iops : min= 384, max= 448, avg=418.45, stdev=14.42, samples=20 00:31:59.848 lat (msec) : 50=100.00% 00:31:59.848 cpu : usr=98.61%, sys=1.00%, ctx=17, majf=0, minf=9 00:31:59.848 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename0: (groupid=0, jobs=1): err= 0: pid=2713409: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=420, BW=1680KiB/s (1720kB/s)(16.4MiB/10018msec) 00:31:59.848 slat (nsec): min=9288, max=50334, avg=12598.37, stdev=2630.03 00:31:59.848 clat (usec): min=22698, max=54382, avg=37974.66, stdev=928.85 00:31:59.848 lat (usec): min=22708, max=54409, avg=37987.26, stdev=928.94 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[37487], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:31:59.848 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.848 | 99.00th=[38536], 99.50th=[39584], 99.90th=[52691], 99.95th=[53216], 00:31:59.848 | 99.99th=[54264] 00:31:59.848 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1675.60, stdev=39.85, samples=20 00:31:59.848 iops : min= 415, max= 448, avg=418.90, stdev= 9.96, samples=20 00:31:59.848 lat (msec) : 50=99.86%, 100=0.14% 00:31:59.848 cpu : usr=98.71%, sys=0.90%, ctx=13, majf=0, minf=11 00:31:59.848 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename0: (groupid=0, jobs=1): err= 0: pid=2713410: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=419, BW=1677KiB/s (1717kB/s)(16.4MiB/10017msec) 00:31:59.848 slat (nsec): min=5650, max=91757, avg=40366.29, stdev=21289.66 00:31:59.848 clat (usec): min=16479, max=53596, avg=37725.40, stdev=1389.44 00:31:59.848 lat (usec): min=16496, max=53611, avg=37765.77, stdev=1389.84 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.848 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.848 | 99.00th=[38536], 99.50th=[39584], 99.90th=[53740], 
99.95th=[53740], 00:31:59.848 | 99.99th=[53740] 00:31:59.848 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1675.30, stdev=56.75, samples=20 00:31:59.848 iops : min= 384, max= 448, avg=418.80, stdev=14.20, samples=20 00:31:59.848 lat (msec) : 20=0.19%, 50=99.43%, 100=0.38% 00:31:59.848 cpu : usr=98.48%, sys=1.12%, ctx=13, majf=0, minf=9 00:31:59.848 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename0: (groupid=0, jobs=1): err= 0: pid=2713411: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10006msec) 00:31:59.848 slat (nsec): min=4706, max=51226, avg=23577.00, stdev=7218.22 00:31:59.848 clat (usec): min=31463, max=64559, avg=37964.77, stdev=1719.96 00:31:59.848 lat (usec): min=31472, max=64572, avg=37988.35, stdev=1719.10 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.848 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.848 | 99.00th=[39060], 99.50th=[39584], 99.90th=[64750], 99.95th=[64750], 00:31:59.848 | 99.99th=[64750] 00:31:59.848 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1669.68, stdev=51.97, samples=19 00:31:59.848 iops : min= 384, max= 448, avg=417.42, stdev=12.99, samples=19 00:31:59.848 lat (msec) : 50=99.62%, 100=0.38% 00:31:59.848 cpu : usr=98.75%, sys=0.85%, ctx=14, majf=0, minf=9 00:31:59.848 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename1: (groupid=0, jobs=1): err= 0: pid=2713412: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:31:59.848 slat (nsec): min=4486, max=53948, avg=24976.59, stdev=6801.82 00:31:59.848 clat (usec): min=28314, max=54698, avg=37889.23, stdev=788.89 00:31:59.848 lat (usec): min=28325, max=54713, avg=37914.21, stdev=788.32 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.848 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.848 | 99.00th=[39060], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:31:59.848 | 99.99th=[54789] 00:31:59.848 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1676.21, stdev=58.58, samples=19 00:31:59.848 iops : min= 384, max= 448, avg=419.05, stdev=14.65, samples=19 00:31:59.848 lat (msec) : 50=99.95%, 100=0.05% 00:31:59.848 cpu : usr=98.84%, sys=0.76%, ctx=13, majf=0, minf=9 00:31:59.848 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:59.848 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename1: (groupid=0, jobs=1): err= 0: pid=2713413: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=419, BW=1679KiB/s (1720kB/s)(16.4MiB/10023msec) 00:31:59.848 slat (nsec): min=5671, max=63852, avg=33616.53, stdev=10105.21 00:31:59.848 clat (usec): min=25196, max=49107, avg=37830.79, stdev=1087.73 00:31:59.848 lat (usec): min=25218, max=49122, avg=37864.41, stdev=1086.83 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.848 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.848 | 99.00th=[39060], 99.50th=[40109], 99.90th=[49021], 99.95th=[49021], 00:31:59.848 | 99.99th=[49021] 00:31:59.848 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1673.85, stdev=57.65, samples=20 00:31:59.848 iops : min= 384, max= 448, avg=418.45, stdev=14.42, samples=20 00:31:59.848 lat (msec) : 50=100.00% 00:31:59.848 cpu : usr=98.64%, sys=0.97%, ctx=11, majf=0, minf=9 00:31:59.848 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename1: (groupid=0, jobs=1): err= 0: pid=2713414: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10010msec) 00:31:59.848 slat (nsec): min=4912, max=89517, avg=22799.35, stdev=12949.07 00:31:59.848 clat (usec): min=21811, max=72952, avg=37770.49, stdev=4078.59 00:31:59.848 lat (usec): min=21820, max=72967, avg=37793.29, stdev=4077.19 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[29230], 5.00th=[31327], 10.00th=[32375], 20.00th=[37487], 00:31:59.848 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[43254], 95.00th=[44303], 00:31:59.848 | 99.00th=[46400], 99.50th=[54264], 99.90th=[72877], 99.95th=[72877], 00:31:59.848 | 99.99th=[72877] 00:31:59.848 bw ( KiB/s): min= 1536, max= 1728, per=4.18%, avg=1684.95, stdev=45.09, samples=20 00:31:59.848 iops : min= 384, max= 432, avg=421.20, stdev=11.27, samples=20 00:31:59.848 lat (msec) : 50=99.38%, 100=0.62% 00:31:59.848 cpu : usr=98.56%, sys=1.05%, ctx=12, majf=0, minf=10 00:31:59.848 IO depths : 1=1.7%, 2=3.4%, 4=8.4%, 8=72.7%, 16=13.8%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=90.3%, 8=7.0%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename1: (groupid=0, jobs=1): err= 0: pid=2713415: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=421, BW=1684KiB/s (1725kB/s)(16.5MiB/10009msec) 00:31:59.848 slat (nsec): min=4702, max=88217, avg=28930.02, stdev=12836.02 00:31:59.848 clat (usec): min=24275, max=80044, avg=37770.65, stdev=3216.66 00:31:59.848 lat (usec): min=24288, max=80057, avg=37799.58, stdev=3215.81 00:31:59.848 clat percentiles (usec): 00:31:59.848 | 1.00th=[28443], 
5.00th=[32375], 10.00th=[36963], 20.00th=[37487], 00:31:59.848 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.848 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[43254], 00:31:59.848 | 99.00th=[46400], 99.50th=[55837], 99.90th=[62653], 99.95th=[62653], 00:31:59.848 | 99.99th=[80217] 00:31:59.848 bw ( KiB/s): min= 1504, max= 1795, per=4.17%, avg=1678.55, stdev=62.48, samples=20 00:31:59.848 iops : min= 376, max= 448, avg=419.60, stdev=15.55, samples=20 00:31:59.848 lat (msec) : 50=99.43%, 100=0.57% 00:31:59.848 cpu : usr=98.96%, sys=0.65%, ctx=15, majf=0, minf=9 00:31:59.848 IO depths : 1=4.0%, 2=8.1%, 4=17.3%, 8=60.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:59.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 complete : 0=0.0%, 4=92.3%, 8=3.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.848 issued rwts: total=4214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.848 filename1: (groupid=0, jobs=1): err= 0: pid=2713416: Wed Jul 24 19:08:43 2024 00:31:59.848 read: IOPS=418, BW=1675KiB/s (1716kB/s)(16.4MiB/10008msec) 00:31:59.848 slat (nsec): min=6145, max=91932, avg=37604.94, stdev=21608.56 00:31:59.849 clat (usec): min=22284, max=76305, avg=37858.01, stdev=1655.95 00:31:59.849 lat (usec): min=22302, max=76322, avg=37895.61, stdev=1653.89 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.849 | 99.00th=[38536], 99.50th=[39584], 99.90th=[61080], 99.95th=[61080], 00:31:59.849 | 99.99th=[76022] 00:31:59.849 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1669.68, stdev=51.97, samples=19 00:31:59.849 iops : min= 384, max= 448, avg=417.42, stdev=12.99, samples=19 00:31:59.849 lat (msec) : 50=99.62%, 100=0.38% 00:31:59.849 cpu : usr=98.83%, sys=0.78%, ctx=9, majf=0, minf=9 00:31:59.849 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename1: (groupid=0, jobs=1): err= 0: pid=2713417: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=420, BW=1681KiB/s (1721kB/s)(16.4MiB/10015msec) 00:31:59.849 slat (nsec): min=5719, max=91088, avg=39403.33, stdev=21514.35 00:31:59.849 clat (usec): min=15695, max=51992, avg=37673.86, stdev=1642.81 00:31:59.849 lat (usec): min=15714, max=52007, avg=37713.26, stdev=1644.04 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.849 | 99.00th=[38536], 99.50th=[39584], 99.90th=[52167], 99.95th=[52167], 00:31:59.849 | 99.99th=[52167] 00:31:59.849 bw ( KiB/s): min= 1539, max= 1792, per=4.16%, avg=1675.45, stdev=56.37, samples=20 00:31:59.849 iops : min= 384, max= 448, avg=418.80, stdev=14.20, samples=20 00:31:59.849 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:31:59.849 cpu : usr=98.65%, sys=0.96%, 
ctx=12, majf=0, minf=9 00:31:59.849 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename1: (groupid=0, jobs=1): err= 0: pid=2713418: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10028msec) 00:31:59.849 slat (nsec): min=11272, max=50335, avg=25271.04, stdev=6883.22 00:31:59.849 clat (usec): min=31339, max=49588, avg=37906.35, stdev=879.00 00:31:59.849 lat (usec): min=31360, max=49609, avg=37931.62, stdev=878.51 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.849 | 99.00th=[39060], 99.50th=[39584], 99.90th=[49546], 99.95th=[49546], 00:31:59.849 | 99.99th=[49546] 00:31:59.849 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1675.60, stdev=57.09, samples=20 00:31:59.849 iops : min= 384, max= 448, avg=418.90, stdev=14.27, samples=20 00:31:59.849 lat (msec) : 50=100.00% 00:31:59.849 cpu : usr=98.72%, sys=0.88%, ctx=9, majf=0, minf=9 00:31:59.849 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename1: (groupid=0, jobs=1): err= 0: pid=2713419: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=419, BW=1676KiB/s (1717kB/s)(16.4MiB/10002msec) 00:31:59.849 slat (nsec): min=6255, max=72958, avg=30776.28, stdev=11312.97 00:31:59.849 clat (usec): min=36344, max=55067, avg=37893.48, stdev=1104.46 00:31:59.849 lat (usec): min=36382, max=55084, avg=37924.26, stdev=1103.17 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.849 | 99.00th=[38536], 99.50th=[40109], 99.90th=[54789], 99.95th=[55313], 00:31:59.849 | 99.99th=[55313] 00:31:59.849 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1676.42, stdev=58.04, samples=19 00:31:59.849 iops : min= 384, max= 448, avg=419.11, stdev=14.51, samples=19 00:31:59.849 lat (msec) : 50=99.62%, 100=0.38% 00:31:59.849 cpu : usr=98.42%, sys=1.01%, ctx=99, majf=0, minf=9 00:31:59.849 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename2: (groupid=0, jobs=1): err= 0: pid=2713420: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10024msec) 00:31:59.849 slat 
(nsec): min=4920, max=49961, avg=24188.31, stdev=6881.87 00:31:59.849 clat (usec): min=28221, max=54415, avg=37905.56, stdev=775.87 00:31:59.849 lat (usec): min=28232, max=54429, avg=37929.75, stdev=774.88 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.849 | 99.00th=[39060], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:31:59.849 | 99.99th=[54264] 00:31:59.849 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1676.21, stdev=58.58, samples=19 00:31:59.849 iops : min= 384, max= 448, avg=419.05, stdev=14.65, samples=19 00:31:59.849 lat (msec) : 50=99.95%, 100=0.05% 00:31:59.849 cpu : usr=98.75%, sys=0.85%, ctx=12, majf=0, minf=9 00:31:59.849 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename2: (groupid=0, jobs=1): err= 0: pid=2713421: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10012msec) 00:31:59.849 slat (nsec): min=4791, max=91941, avg=37630.37, stdev=21491.96 00:31:59.849 clat (usec): min=15746, max=81713, avg=37825.71, stdev=3862.30 00:31:59.849 lat (usec): min=15765, max=81728, avg=37863.34, stdev=3861.92 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[22414], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.849 | 99.00th=[54264], 99.50th=[58459], 99.90th=[65274], 99.95th=[65274], 00:31:59.849 | 99.99th=[81265] 00:31:59.849 bw ( KiB/s): min= 1456, max= 1792, per=4.15%, avg=1670.20, stdev=70.06, samples=20 00:31:59.849 iops : min= 364, max= 448, avg=417.55, stdev=17.52, samples=20 00:31:59.849 lat (msec) : 20=0.38%, 50=97.59%, 100=2.03% 00:31:59.849 cpu : usr=98.81%, sys=0.79%, ctx=13, majf=0, minf=9 00:31:59.849 IO depths : 1=4.7%, 2=10.5%, 4=23.6%, 8=53.3%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename2: (groupid=0, jobs=1): err= 0: pid=2713422: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10028msec) 00:31:59.849 slat (nsec): min=9501, max=49872, avg=18544.01, stdev=7365.85 00:31:59.849 clat (usec): min=31435, max=49540, avg=37982.21, stdev=868.86 00:31:59.849 lat (usec): min=31453, max=49560, avg=38000.76, stdev=868.08 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:31:59.849 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.849 | 99.00th=[39584], 99.50th=[39584], 99.90th=[49546], 99.95th=[49546], 00:31:59.849 | 
99.99th=[49546] 00:31:59.849 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1675.60, stdev=57.09, samples=20 00:31:59.849 iops : min= 384, max= 448, avg=418.90, stdev=14.27, samples=20 00:31:59.849 lat (msec) : 50=100.00% 00:31:59.849 cpu : usr=98.18%, sys=1.34%, ctx=31, majf=0, minf=9 00:31:59.849 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.849 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.849 filename2: (groupid=0, jobs=1): err= 0: pid=2713423: Wed Jul 24 19:08:43 2024 00:31:59.849 read: IOPS=419, BW=1680KiB/s (1720kB/s)(16.4MiB/10022msec) 00:31:59.849 slat (nsec): min=4533, max=50877, avg=24780.88, stdev=7008.04 00:31:59.849 clat (usec): min=26126, max=60864, avg=37881.35, stdev=915.07 00:31:59.849 lat (usec): min=26137, max=60878, avg=37906.13, stdev=914.83 00:31:59.849 clat percentiles (usec): 00:31:59.849 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.849 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:31:59.849 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:31:59.849 | 99.00th=[39060], 99.50th=[39584], 99.90th=[43779], 99.95th=[49546], 00:31:59.849 | 99.99th=[61080] 00:31:59.849 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1676.58, stdev=58.50, samples=19 00:31:59.849 iops : min= 384, max= 448, avg=419.11, stdev=14.63, samples=19 00:31:59.849 lat (msec) : 50=99.95%, 100=0.05% 00:31:59.850 cpu : usr=98.75%, sys=0.85%, ctx=11, majf=0, minf=9 00:31:59.850 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.850 filename2: (groupid=0, jobs=1): err= 0: pid=2713424: Wed Jul 24 19:08:43 2024 00:31:59.850 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.4MiB/10001msec) 00:31:59.850 slat (nsec): min=4408, max=66845, avg=33929.69, stdev=10404.10 00:31:59.850 clat (usec): min=24694, max=73319, avg=37830.32, stdev=1733.91 00:31:59.850 lat (usec): min=24704, max=73332, avg=37864.25, stdev=1733.75 00:31:59.850 clat percentiles (usec): 00:31:59.850 | 1.00th=[34341], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.850 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.850 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.850 | 99.00th=[39060], 99.50th=[52167], 99.90th=[55837], 99.95th=[55837], 00:31:59.850 | 99.99th=[72877] 00:31:59.850 bw ( KiB/s): min= 1587, max= 1792, per=4.17%, avg=1679.26, stdev=52.01, samples=19 00:31:59.850 iops : min= 396, max= 448, avg=419.74, stdev=13.09, samples=19 00:31:59.850 lat (msec) : 50=99.43%, 100=0.57% 00:31:59.850 cpu : usr=98.86%, sys=0.74%, ctx=11, majf=0, minf=9 00:31:59.850 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:59.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 issued rwts: total=4198,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:59.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.850 filename2: (groupid=0, jobs=1): err= 0: pid=2713425: Wed Jul 24 19:08:43 2024 00:31:59.850 read: IOPS=421, BW=1687KiB/s (1727kB/s)(16.5MiB/10017msec) 00:31:59.850 slat (nsec): min=9868, max=99222, avg=42769.06, stdev=13342.61 00:31:59.850 clat (usec): min=9199, max=40110, avg=37538.89, stdev=1864.02 00:31:59.850 lat (usec): min=9220, max=40141, avg=37581.66, stdev=1865.98 00:31:59.850 clat percentiles (usec): 00:31:59.850 | 1.00th=[31065], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:31:59.850 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:31:59.850 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.850 | 99.00th=[38536], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:31:59.850 | 99.99th=[40109] 00:31:59.850 bw ( KiB/s): min= 1660, max= 1792, per=4.18%, avg=1682.00, stdev=46.87, samples=20 00:31:59.850 iops : min= 415, max= 448, avg=420.50, stdev=11.72, samples=20 00:31:59.850 lat (msec) : 10=0.17%, 20=0.21%, 50=99.62% 00:31:59.850 cpu : usr=98.08%, sys=1.41%, ctx=16, majf=0, minf=9 00:31:59.850 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.850 filename2: (groupid=0, jobs=1): err= 0: pid=2713426: Wed Jul 24 19:08:43 2024 00:31:59.850 read: IOPS=422, BW=1691KiB/s (1732kB/s)(16.6MiB/10027msec) 00:31:59.850 slat (nsec): min=5669, max=91803, avg=32803.23, stdev=11749.31 00:31:59.850 clat (usec): min=10116, max=40088, avg=37574.81, stdev=2414.92 00:31:59.850 lat (usec): min=10148, max=40117, avg=37607.61, stdev=2415.91 00:31:59.850 clat percentiles (usec): 00:31:59.850 | 1.00th=[27395], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:31:59.850 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:31:59.850 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.850 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:31:59.850 | 99.99th=[40109] 00:31:59.850 bw ( KiB/s): min= 1660, max= 1920, per=4.19%, avg=1688.60, stdev=66.99, samples=20 00:31:59.850 iops : min= 415, max= 480, avg=422.15, stdev=16.75, samples=20 00:31:59.850 lat (msec) : 20=0.71%, 50=99.29% 00:31:59.850 cpu : usr=98.51%, sys=1.04%, ctx=9, majf=0, minf=9 00:31:59.850 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:59.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.850 filename2: (groupid=0, jobs=1): err= 0: pid=2713427: Wed Jul 24 19:08:43 2024 00:31:59.850 read: IOPS=420, BW=1680KiB/s (1721kB/s)(16.4MiB/10008msec) 00:31:59.850 slat (usec): min=10, max=100, avg=38.54, stdev=13.75 00:31:59.850 clat (usec): min=23868, max=92935, avg=37712.48, stdev=2957.73 00:31:59.850 lat (usec): min=23889, max=92960, avg=37751.02, stdev=2957.53 00:31:59.850 clat percentiles (usec): 00:31:59.850 | 1.00th=[28181], 5.00th=[36963], 10.00th=[37487], 
20.00th=[37487], 00:31:59.850 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:31:59.850 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:31:59.850 | 99.00th=[39584], 99.50th=[57410], 99.90th=[72877], 99.95th=[72877], 00:31:59.850 | 99.99th=[92799] 00:31:59.850 bw ( KiB/s): min= 1635, max= 1795, per=4.16%, avg=1674.70, stdev=41.14, samples=20 00:31:59.850 iops : min= 408, max= 448, avg=418.60, stdev=10.21, samples=20 00:31:59.850 lat (msec) : 50=99.43%, 100=0.57% 00:31:59.850 cpu : usr=98.19%, sys=1.35%, ctx=13, majf=0, minf=9 00:31:59.850 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:59.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.850 issued rwts: total=4204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.850 00:31:59.850 Run status group 0 (all jobs): 00:31:59.850 READ: bw=39.3MiB/s (41.3MB/s), 1675KiB/s-1692KiB/s (1715kB/s-1733kB/s), io=395MiB (414MB), run=10001-10028msec 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:59.850 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 bdev_null0 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 [2024-07-24 19:08:43.634112] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 bdev_null1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:59.851 { 00:31:59.851 "params": { 00:31:59.851 "name": "Nvme$subsystem", 00:31:59.851 "trtype": "$TEST_TRANSPORT", 00:31:59.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.851 "adrfam": "ipv4", 00:31:59.851 "trsvcid": "$NVMF_PORT", 00:31:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.851 "hdgst": ${hdgst:-false}, 00:31:59.851 "ddgst": ${ddgst:-false} 00:31:59.851 }, 00:31:59.851 "method": "bdev_nvme_attach_controller" 00:31:59.851 } 00:31:59.851 EOF 00:31:59.851 )") 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:59.851 { 00:31:59.851 "params": { 00:31:59.851 "name": "Nvme$subsystem", 00:31:59.851 "trtype": "$TEST_TRANSPORT", 00:31:59.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.851 "adrfam": "ipv4", 00:31:59.851 "trsvcid": "$NVMF_PORT", 00:31:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.851 "hdgst": ${hdgst:-false}, 00:31:59.851 "ddgst": ${ddgst:-false} 00:31:59.851 }, 00:31:59.851 "method": "bdev_nvme_attach_controller" 00:31:59.851 } 00:31:59.851 EOF 00:31:59.851 )") 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:59.851 19:08:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:59.851 19:08:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:59.851 "params": { 00:31:59.851 "name": "Nvme0", 00:31:59.851 "trtype": "tcp", 00:31:59.852 "traddr": "10.0.0.2", 00:31:59.852 "adrfam": "ipv4", 00:31:59.852 "trsvcid": "4420", 00:31:59.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:59.852 "hdgst": false, 00:31:59.852 "ddgst": false 00:31:59.852 }, 00:31:59.852 "method": "bdev_nvme_attach_controller" 00:31:59.852 },{ 00:31:59.852 "params": { 00:31:59.852 "name": "Nvme1", 00:31:59.852 "trtype": "tcp", 00:31:59.852 "traddr": "10.0.0.2", 00:31:59.852 "adrfam": "ipv4", 00:31:59.852 "trsvcid": "4420", 00:31:59.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:59.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:59.852 "hdgst": false, 00:31:59.852 "ddgst": false 00:31:59.852 }, 00:31:59.852 "method": "bdev_nvme_attach_controller" 00:31:59.852 }' 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:59.852 19:08:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.852 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:59.852 ... 00:31:59.852 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:59.852 ... 
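Annotation: the harness traced above never has fio open a real block device. dif.sh preloads the spdk_bdev fio plugin and hands fio two anonymous pipes, one carrying the generated bdev JSON config and one carrying the job file. A minimal standalone sketch of the same invocation (paths as used in this run; bdev.json and dif.fio are hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 pipes):

    # sketch of the traced invocation, assuming the config/job files exist
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio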
00:31:59.852 fio-3.35 00:31:59.852 Starting 4 threads 00:31:59.852 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.162 00:32:05.162 filename0: (groupid=0, jobs=1): err= 0: pid=2715652: Wed Jul 24 19:08:50 2024 00:32:05.162 read: IOPS=1774, BW=13.9MiB/s (14.5MB/s)(69.4MiB/5003msec) 00:32:05.162 slat (nsec): min=9615, max=73376, avg=19881.19, stdev=8805.73 00:32:05.162 clat (usec): min=1940, max=45374, avg=4449.70, stdev=1403.70 00:32:05.162 lat (usec): min=1965, max=45399, avg=4469.58, stdev=1403.70 00:32:05.162 clat percentiles (usec): 00:32:05.162 | 1.00th=[ 2966], 5.00th=[ 3359], 10.00th=[ 3589], 20.00th=[ 4047], 00:32:05.162 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:32:05.162 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5932], 00:32:05.162 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[45351], 00:32:05.162 | 99.99th=[45351] 00:32:05.162 bw ( KiB/s): min=12160, max=15472, per=25.47%, avg=14186.67, stdev=961.03, samples=9 00:32:05.162 iops : min= 1520, max= 1934, avg=1773.33, stdev=120.13, samples=9 00:32:05.162 lat (msec) : 2=0.01%, 4=19.18%, 10=80.71%, 50=0.09% 00:32:05.162 cpu : usr=96.94%, sys=2.62%, ctx=11, majf=0, minf=11 00:32:05.162 IO depths : 1=0.1%, 2=3.2%, 4=69.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 issued rwts: total=8877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.162 filename0: (groupid=0, jobs=1): err= 0: pid=2715653: Wed Jul 24 19:08:50 2024 00:32:05.162 read: IOPS=1746, BW=13.6MiB/s (14.3MB/s)(68.3MiB/5002msec) 00:32:05.162 slat (usec): min=9, max=110, avg=29.55, stdev= 9.79 00:32:05.162 clat (usec): min=1166, max=7549, avg=4505.00, stdev=530.05 00:32:05.162 lat (usec): min=1211, max=7571, avg=4534.55, stdev=529.60 00:32:05.162 clat percentiles (usec): 00:32:05.162 | 1.00th=[ 3326], 5.00th=[ 3851], 10.00th=[ 4047], 20.00th=[ 4228], 00:32:05.162 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4490], 00:32:05.162 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5538], 00:32:05.162 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7177], 99.95th=[ 7242], 00:32:05.162 | 99.99th=[ 7570] 00:32:05.162 bw ( KiB/s): min=13536, max=14416, per=25.07%, avg=13960.89, stdev=307.99, samples=9 00:32:05.162 iops : min= 1692, max= 1802, avg=1745.11, stdev=38.50, samples=9 00:32:05.162 lat (msec) : 2=0.05%, 4=8.46%, 10=91.50% 00:32:05.162 cpu : usr=96.46%, sys=3.02%, ctx=9, majf=0, minf=9 00:32:05.162 IO depths : 1=0.1%, 2=1.9%, 4=66.9%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 issued rwts: total=8737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.162 filename1: (groupid=0, jobs=1): err= 0: pid=2715654: Wed Jul 24 19:08:50 2024 00:32:05.162 read: IOPS=1734, BW=13.5MiB/s (14.2MB/s)(67.8MiB/5003msec) 00:32:05.162 slat (usec): min=10, max=110, avg=29.86, stdev=10.54 00:32:05.162 clat (usec): min=1592, max=7869, avg=4535.98, stdev=571.03 00:32:05.162 lat (usec): min=1613, max=7890, avg=4565.85, stdev=569.89 00:32:05.162 clat percentiles (usec): 00:32:05.162 | 1.00th=[ 3294], 5.00th=[ 3884], 
10.00th=[ 4080], 20.00th=[ 4228], 00:32:05.162 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4490], 00:32:05.162 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5735], 00:32:05.162 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7439], 00:32:05.162 | 99.99th=[ 7898] 00:32:05.162 bw ( KiB/s): min=13424, max=14144, per=24.82%, avg=13820.44, stdev=253.08, samples=9 00:32:05.162 iops : min= 1678, max= 1768, avg=1727.56, stdev=31.64, samples=9 00:32:05.162 lat (msec) : 2=0.09%, 4=7.43%, 10=92.47% 00:32:05.162 cpu : usr=96.20%, sys=3.24%, ctx=9, majf=0, minf=9 00:32:05.162 IO depths : 1=0.1%, 2=1.6%, 4=67.7%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 issued rwts: total=8677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.162 filename1: (groupid=0, jobs=1): err= 0: pid=2715655: Wed Jul 24 19:08:50 2024 00:32:05.162 read: IOPS=1706, BW=13.3MiB/s (14.0MB/s)(66.7MiB/5002msec) 00:32:05.162 slat (nsec): min=8264, max=76409, avg=17521.86, stdev=8040.26 00:32:05.162 clat (usec): min=1590, max=8317, avg=4636.84, stdev=662.18 00:32:05.162 lat (usec): min=1600, max=8341, avg=4654.36, stdev=661.02 00:32:05.162 clat percentiles (usec): 00:32:05.162 | 1.00th=[ 3589], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:32:05.162 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:05.162 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5145], 95.00th=[ 6456], 00:32:05.162 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 7570], 99.95th=[ 7963], 00:32:05.162 | 99.99th=[ 8291] 00:32:05.162 bw ( KiB/s): min=12896, max=14208, per=24.57%, avg=13686.56, stdev=498.31, samples=9 00:32:05.162 iops : min= 1612, max= 1776, avg=1710.78, stdev=62.30, samples=9 00:32:05.162 lat (msec) : 2=0.06%, 4=4.50%, 10=95.44% 00:32:05.162 cpu : usr=97.10%, sys=2.54%, ctx=8, majf=0, minf=9 00:32:05.162 IO depths : 1=0.1%, 2=1.4%, 4=72.4%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.162 issued rwts: total=8537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.162 00:32:05.162 Run status group 0 (all jobs): 00:32:05.162 READ: bw=54.4MiB/s (57.0MB/s), 13.3MiB/s-13.9MiB/s (14.0MB/s-14.5MB/s), io=272MiB (285MB), run=5002-5003msec 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 00:32:05.422 real 0m24.801s 00:32:05.422 user 5m8.646s 00:32:05.422 sys 0m4.604s 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 ************************************ 00:32:05.422 END TEST fio_dif_rand_params 00:32:05.422 ************************************ 00:32:05.422 19:08:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:05.422 19:08:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:05.422 19:08:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 ************************************ 00:32:05.422 START TEST fio_dif_digest 00:32:05.422 ************************************ 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:05.422 19:08:50 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 bdev_null0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.422 [2024-07-24 19:08:50.382138] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:05.422 { 00:32:05.422 "params": { 00:32:05.422 "name": "Nvme$subsystem", 00:32:05.422 "trtype": "$TEST_TRANSPORT", 00:32:05.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.422 "adrfam": "ipv4", 00:32:05.422 
"trsvcid": "$NVMF_PORT", 00:32:05.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.422 "hdgst": ${hdgst:-false}, 00:32:05.422 "ddgst": ${ddgst:-false} 00:32:05.422 }, 00:32:05.422 "method": "bdev_nvme_attach_controller" 00:32:05.422 } 00:32:05.422 EOF 00:32:05.422 )") 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:05.422 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:05.423 "params": { 00:32:05.423 "name": "Nvme0", 00:32:05.423 "trtype": "tcp", 00:32:05.423 "traddr": "10.0.0.2", 00:32:05.423 "adrfam": "ipv4", 00:32:05.423 "trsvcid": "4420", 00:32:05.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.423 "hdgst": true, 00:32:05.423 "ddgst": true 00:32:05.423 }, 00:32:05.423 "method": "bdev_nvme_attach_controller" 00:32:05.423 }' 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:32:05.423 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:05.714 19:08:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.990 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:05.990 ... 
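Annotation: compared with the rand_params pass, the digest pass changes only a handful of knobs, all visible in the trace above: DIF type 3 on the null bdev, a single 128k block size, three jobs at queue depth 3 for 10 seconds, and NVMe/TCP header and data digests enabled on the attached controller. Recap of the parameters dif.sh sets for this pass:

    # parameters set at dif.sh@127-128, per the trace
    NULL_DIF=3                # bdev_null_create ... --dif-type 3
    bs=128k,128k,128k         # fio block sizes
    numjobs=3
    iodepth=3
    runtime=10
    hdgst=true; ddgst=true    # becomes "hdgst": true, "ddgst": true in the JSON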
00:32:05.990 fio-3.35 00:32:05.990 Starting 3 threads 00:32:05.990 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.194 00:32:18.194 filename0: (groupid=0, jobs=1): err= 0: pid=2716861: Wed Jul 24 19:09:01 2024 00:32:18.194 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10046msec) 00:32:18.194 slat (nsec): min=9679, max=57772, avg=16730.93, stdev=2899.65 00:32:18.194 clat (usec): min=10561, max=53851, avg=16047.54, stdev=1863.63 00:32:18.194 lat (usec): min=10577, max=53867, avg=16064.27, stdev=1863.61 00:32:18.194 clat percentiles (usec): 00:32:18.194 | 1.00th=[11338], 5.00th=[13304], 10.00th=[14353], 20.00th=[15008], 00:32:18.194 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:32:18.194 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:18.194 | 99.00th=[19006], 99.50th=[19792], 99.90th=[50070], 99.95th=[53740], 00:32:18.194 | 99.99th=[53740] 00:32:18.194 bw ( KiB/s): min=23040, max=25088, per=33.93%, avg=23948.80, stdev=547.64, samples=20 00:32:18.194 iops : min= 180, max= 196, avg=187.10, stdev= 4.28, samples=20 00:32:18.194 lat (msec) : 20=99.73%, 50=0.16%, 100=0.11% 00:32:18.194 cpu : usr=94.91%, sys=4.16%, ctx=608, majf=0, minf=169 00:32:18.194 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.194 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:18.194 filename0: (groupid=0, jobs=1): err= 0: pid=2716862: Wed Jul 24 19:09:01 2024 00:32:18.194 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(217MiB/10003msec) 00:32:18.194 slat (nsec): min=9712, max=27548, avg=15427.15, stdev=2338.94 00:32:18.194 clat (usec): min=12902, max=60787, avg=17237.48, stdev=5019.41 00:32:18.194 lat (usec): min=12919, max=60802, avg=17252.91, stdev=5019.41 00:32:18.194 clat percentiles (usec): 00:32:18.194 | 1.00th=[13698], 5.00th=[14615], 10.00th=[15139], 20.00th=[15664], 00:32:18.194 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:32:18.194 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18482], 95.00th=[19006], 00:32:18.194 | 99.00th=[57934], 99.50th=[58459], 99.90th=[60031], 99.95th=[60556], 00:32:18.194 | 99.99th=[60556] 00:32:18.194 bw ( KiB/s): min=18944, max=23808, per=31.52%, avg=22245.05, stdev=1378.32, samples=19 00:32:18.194 iops : min= 148, max= 186, avg=173.79, stdev=10.77, samples=19 00:32:18.194 lat (msec) : 20=98.10%, 50=0.52%, 100=1.38% 00:32:18.194 cpu : usr=95.96%, sys=3.68%, ctx=23, majf=0, minf=89 00:32:18.194 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.194 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:18.194 filename0: (groupid=0, jobs=1): err= 0: pid=2716863: Wed Jul 24 19:09:01 2024 00:32:18.194 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(241MiB/10046msec) 00:32:18.194 slat (nsec): min=9662, max=62778, avg=15164.32, stdev=2653.68 00:32:18.194 clat (usec): min=8761, max=51698, avg=15597.96, stdev=1918.10 00:32:18.194 lat (usec): min=8778, max=51715, avg=15613.13, stdev=1918.11 00:32:18.194 clat percentiles (usec): 00:32:18.194 | 
1.00th=[10159], 5.00th=[12780], 10.00th=[13960], 20.00th=[14615], 00:32:18.194 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:32:18.194 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:32:18.194 | 99.00th=[18482], 99.50th=[19006], 99.90th=[49546], 99.95th=[51643], 00:32:18.194 | 99.99th=[51643] 00:32:18.194 bw ( KiB/s): min=23552, max=26624, per=34.92%, avg=24642.35, stdev=910.27, samples=20 00:32:18.194 iops : min= 184, max= 208, avg=192.50, stdev= 7.13, samples=20 00:32:18.194 lat (msec) : 10=0.83%, 20=99.01%, 50=0.10%, 100=0.05% 00:32:18.194 cpu : usr=95.95%, sys=3.68%, ctx=25, majf=0, minf=50 00:32:18.194 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.194 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:18.194 00:32:18.194 Run status group 0 (all jobs): 00:32:18.194 READ: bw=68.9MiB/s (72.3MB/s), 21.7MiB/s-24.0MiB/s (22.8MB/s-25.1MB/s), io=692MiB (726MB), run=10003-10046msec 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.194 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.195 00:32:18.195 real 0m11.423s 00:32:18.195 user 0m41.389s 00:32:18.195 sys 0m1.548s 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.195 19:09:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:18.195 ************************************ 00:32:18.195 END TEST fio_dif_digest 00:32:18.195 ************************************ 00:32:18.195 19:09:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:18.195 19:09:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.195 rmmod nvme_tcp 00:32:18.195 rmmod nvme_fabrics 
00:32:18.195 rmmod nvme_keyring 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2707303 ']' 00:32:18.195 19:09:01 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2707303 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2707303 ']' 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2707303 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2707303 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2707303' 00:32:18.195 killing process with pid 2707303 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2707303 00:32:18.195 19:09:01 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2707303 00:32:18.195 19:09:02 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:18.195 19:09:02 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:20.097 Waiting for block devices as requested 00:32:20.097 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:20.097 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:20.097 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:20.356 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:20.356 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:20.356 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:20.614 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:20.614 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:20.614 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:20.873 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:20.873 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:20.873 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:20.873 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:21.132 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:21.132 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:21.132 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:21.391 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:21.391 19:09:06 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.391 19:09:06 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.391 19:09:06 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.391 19:09:06 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.391 19:09:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.391 19:09:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:21.391 19:09:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.925 19:09:08 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.925 00:32:23.925 real 1m16.234s 00:32:23.925 user 7m44.737s 00:32:23.925 sys 0m19.254s 00:32:23.925 19:09:08 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.925 19:09:08 nvmf_dif -- common/autotest_common.sh@10 
-- # set +x 00:32:23.925 ************************************ 00:32:23.925 END TEST nvmf_dif 00:32:23.925 ************************************ 00:32:23.925 19:09:08 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:23.925 19:09:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:23.925 19:09:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.925 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:32:23.925 ************************************ 00:32:23.925 START TEST nvmf_abort_qd_sizes 00:32:23.925 ************************************ 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:23.925 * Looking for test storage... 00:32:23.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.925 19:09:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.925 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:29.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:29.201 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:29.201 Found net devices under 0000:af:00.0: cvl_0_0 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:29.201 Found net devices under 0000:af:00.1: cvl_0_1 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
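Annotation: with both physical E810 ports discovered (cvl_0_0 and cvl_0_1), nvmf_tcp_init isolates the target port in its own network namespace so initiator and target can exchange real TCP traffic on one host. The commands traced next reduce to:

    # target/initiator split performed by nvmf_tcp_init, as traced below
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT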
00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.201 19:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.201 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.202 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.202 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.202 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.202 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.202 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:32:29.461 00:32:29.461 --- 10.0.0.2 ping statistics --- 00:32:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.461 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:32:29.461 00:32:29.461 --- 10.0.0.1 ping statistics --- 00:32:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.461 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:29.461 19:09:14 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:31.997 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:31.997 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:32.968 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2725617 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2725617 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2725617 ']' 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:32.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:32.968 19:09:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:33.227 [2024-07-24 19:09:18.018618] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:32:33.227 [2024-07-24 19:09:18.018679] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.227 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.227 [2024-07-24 19:09:18.105138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:33.227 [2024-07-24 19:09:18.197911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.227 [2024-07-24 19:09:18.197955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.227 [2024-07-24 19:09:18.197969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.227 [2024-07-24 19:09:18.197980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.227 [2024-07-24 19:09:18.197990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.227 [2024-07-24 19:09:18.198047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.227 [2024-07-24 19:09:18.198159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.227 [2024-07-24 19:09:18.198271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:33.227 [2024-07-24 19:09:18.198274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.163 19:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.163 19:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:34.163 19:09:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:34.163 19:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:34.163 19:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:32:34.163 19:09:19 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.163 19:09:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:34.163 ************************************ 00:32:34.163 START TEST spdk_target_abort 00:32:34.163 ************************************ 00:32:34.163 19:09:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:34.163 19:09:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:34.163 19:09:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:32:34.163 19:09:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.163 19:09:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.448 spdk_targetn1 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.448 [2024-07-24 19:09:21.908814] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.448 [2024-07-24 19:09:21.953147] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:37.448 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:37.449 19:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:37.449 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:40.736 Initializing NVMe Controllers 00:32:40.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:40.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:40.736 Initialization complete. Launching workers. 00:32:40.736 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6604, failed: 0 00:32:40.736 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1192, failed to submit 5412 00:32:40.736 success 731, unsuccess 461, failed 0 00:32:40.736 19:09:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:40.736 19:09:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:40.736 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.021 Initializing NVMe Controllers 00:32:44.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:44.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:44.022 Initialization complete. Launching workers. 00:32:44.022 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8517, failed: 0 00:32:44.022 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7298 00:32:44.022 success 326, unsuccess 893, failed 0 00:32:44.022 19:09:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:44.022 19:09:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:44.022 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.303 Initializing NVMe Controllers 00:32:47.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:47.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:47.303 Initialization complete. Launching workers. 
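Each abort run above ends with a fixed accounting: the NS line counts I/Os completed against the namespace, the CTRLR line counts abort commands submitted plus those that could not be submitted, and the numbers always reconcile: success plus unsuccess equals aborts submitted (731 + 461 = 1192 for the -q 4 run), and submitted plus failed-to-submit equals I/Os completed (1192 + 5412 = 6604). A minimal sketch that re-checks that invariant from a saved run; the run.log name and the one-run-per-file layout are assumptions here, not part of the suite:

    # Hedged sketch: re-check the abort accounting printed above.
    # Assumes a single abort run per file, in exactly the format shown.
    io=$(grep -o 'I/O completed: [0-9]\+' run.log | head -n1 | grep -o '[0-9]\+')
    sub=$(grep -o 'abort submitted [0-9]\+' run.log | head -n1 | grep -o '[0-9]\+')
    unsub=$(grep -o 'failed to submit [0-9]\+' run.log | head -n1 | grep -o '[0-9]\+')
    if [ "$io" -eq $((sub + unsub)) ]; then
        echo "accounting consistent: $sub aborts + $unsub unsubmitted = $io I/Os"
    else
        echo "accounting mismatch" >&2
    fi

The suite sweeps -q 4, 24, and 64 (qds=(4 24 64) in the trace above) to exercise the abort path with different numbers of commands in flight.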
00:32:47.303 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17729, failed: 0 00:32:47.303 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1974, failed to submit 15755 00:32:47.303 success 146, unsuccess 1828, failed 0 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.304 19:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2725617 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2725617 ']' 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2725617 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:48.237 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2725617 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2725617' 00:32:48.497 killing process with pid 2725617 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2725617 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2725617 00:32:48.497 00:32:48.497 real 0m14.435s 00:32:48.497 user 0m57.944s 00:32:48.497 sys 0m2.186s 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.497 19:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.497 ************************************ 00:32:48.497 END TEST spdk_target_abort 00:32:48.497 ************************************ 00:32:48.755 19:09:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:48.755 19:09:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:48.755 19:09:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.755 19:09:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:48.755 ************************************ 00:32:48.755 START TEST kernel_target_abort 00:32:48.755 
************************************ 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:48.755 19:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:51.290 Waiting for block devices as requested 00:32:51.549 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:51.549 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:51.809 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:51.809 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:51.809 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:51.809 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:52.068 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:52.068 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:52.068 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:52.327 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:52.327 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:52.327 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:52.327 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:52.586 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:52.586 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:52.586 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:52.846 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:52.846 No valid GPT data, bailing 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:52.846 19:09:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:52.846 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:53.106 00:32:53.106 Discovery Log Number of Records 2, Generation counter 2 00:32:53.106 =====Discovery Log Entry 0====== 00:32:53.106 trtype: tcp 00:32:53.106 adrfam: ipv4 00:32:53.106 subtype: current discovery subsystem 00:32:53.106 treq: not specified, sq flow control disable supported 00:32:53.106 portid: 1 00:32:53.106 trsvcid: 4420 00:32:53.106 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:53.106 traddr: 10.0.0.1 00:32:53.106 eflags: none 00:32:53.106 sectype: none 00:32:53.106 =====Discovery Log Entry 1====== 00:32:53.106 trtype: tcp 00:32:53.106 adrfam: ipv4 00:32:53.106 subtype: nvme subsystem 00:32:53.106 treq: not specified, sq flow control disable supported 00:32:53.106 portid: 1 00:32:53.106 trsvcid: 4420 00:32:53.106 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:53.106 traddr: 10.0.0.1 00:32:53.106 eflags: none 00:32:53.106 sectype: none 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.106 19:09:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.106 19:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.106 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.499 Initializing NVMe Controllers 00:32:56.499 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.499 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.499 Initialization complete. Launching workers. 00:32:56.499 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47621, failed: 0 00:32:56.499 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47621, failed to submit 0 00:32:56.499 success 0, unsuccess 47621, failed 0 00:32:56.499 19:09:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:56.499 19:09:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:56.499 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.799 Initializing NVMe Controllers 00:32:59.799 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:59.799 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:59.799 Initialization complete. Launching workers. 
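The abort runs in this test go against the kernel nvmet target assembled just above: configure_kernel_target built the whole subsystem with nothing but mkdir, echo, and ln -s under /sys/kernel/config/nvmet. Condensed into one place, the sequence is roughly the sketch below; the attribute file names (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the stock nvmet configfs names and are an assumption here, since the xtrace shows only the values being echoed, not their destination files:

    # Sketch of the configfs setup traced above (run as root).
    modprobe nvmet
    modprobe nvmet-tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # port 1 now exports the subsystem

Teardown (clean_kernel_target, traced further below) is the same walk in reverse: remove the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.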
00:32:59.799 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81104, failed: 0 00:32:59.799 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20374, failed to submit 60730 00:32:59.799 success 0, unsuccess 20374, failed 0 00:32:59.799 19:09:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:59.799 19:09:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:59.799 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.340 Initializing NVMe Controllers 00:33:02.340 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:02.340 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:02.340 Initialization complete. Launching workers. 00:33:02.340 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78102, failed: 0 00:33:02.340 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19514, failed to submit 58588 00:33:02.340 success 0, unsuccess 19514, failed 0 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:02.340 19:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:05.636 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:33:05.636 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:05.636 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:06.206 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:33:06.206 00:33:06.206 real 0m17.575s 00:33:06.206 user 0m8.189s 00:33:06.206 sys 0m5.222s 00:33:06.206 19:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:06.206 19:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:06.206 ************************************ 00:33:06.206 END TEST kernel_target_abort 00:33:06.206 ************************************ 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:06.206 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:06.206 rmmod nvme_tcp 00:33:06.206 rmmod nvme_fabrics 00:33:06.465 rmmod nvme_keyring 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2725617 ']' 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2725617 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2725617 ']' 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2725617 00:33:06.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2725617) - No such process 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2725617 is not found' 00:33:06.465 Process with pid 2725617 is not found 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:06.465 19:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:09.003 Waiting for block devices as requested 00:33:09.003 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:33:09.263 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:09.263 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:09.263 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:09.522 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:09.522 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:09.522 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:09.522 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:09.781 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:09.781 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:09.781 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:10.041 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:10.041 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:10.041 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:10.041 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:10.301 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:33:10.301 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:10.301 19:09:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.838 19:09:57 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:12.838 00:33:12.838 real 0m48.924s 00:33:12.838 user 1m10.321s 00:33:12.838 sys 0m15.925s 00:33:12.838 19:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:12.838 19:09:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:12.838 ************************************ 00:33:12.838 END TEST nvmf_abort_qd_sizes 00:33:12.838 ************************************ 00:33:12.838 19:09:57 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:12.838 19:09:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:12.838 19:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:12.838 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:33:12.838 ************************************ 00:33:12.838 START TEST keyring_file 00:33:12.838 ************************************ 00:33:12.838 19:09:57 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:12.838 * Looking for test storage... 
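Every test in this log, including the keyring_file suite starting here, executes under the run_test wrapper, which is what produces the asterisk banners and the real/user/sys timing summaries seen above. Its essential shape is the following simplified sketch; the real helper in autotest_common.sh also validates arguments, nests suites, and propagates exit codes, all omitted here:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # the bash time keyword prints the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # invocation shape, as in the trace above
    run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh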
00:33:12.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.838 19:09:57 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.838 19:09:57 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.838 19:09:57 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.838 19:09:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.838 19:09:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.838 19:09:57 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.838 19:09:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:12.838 19:09:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:12.838 19:09:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tV6oulXXtT 00:33:12.838 19:09:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:12.838 19:09:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:12.838 19:09:57 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tV6oulXXtT 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tV6oulXXtT 00:33:12.839 19:09:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tV6oulXXtT 00:33:12.839 19:09:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MP37YD39Ln 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:12.839 19:09:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:12.839 19:09:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:12.839 19:09:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:12.839 19:09:57 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:12.839 19:09:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:12.839 19:09:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MP37YD39Ln 00:33:12.839 19:09:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MP37YD39Ln 00:33:12.839 19:09:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MP37YD39Ln 00:33:12.839 19:09:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=2735012 00:33:12.839 19:09:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:12.839 19:09:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2735012 00:33:12.839 19:09:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2735012 ']' 00:33:12.839 19:09:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.839 19:09:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.839 19:09:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.839 19:09:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.839 19:09:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:12.839 [2024-07-24 19:09:57.709076] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
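Both PSK files were staged just above, before spdk_tgt is launched. The prep_key trace reduces to three steps: allocate a throwaway path with mktemp, render the raw hex key into the NVMe TLS interchange form ("NVMeTLSkey-1:...", produced by an inline python snippet the log does not reproduce), and clamp permissions to 0600, which matters because the keyring refuses looser modes (the chmod 0660 negative test near the end of this suite exercises exactly that). A condensed sketch of the helper as traced:

    # Condensed from the keyring/common.sh xtrace above; format_interchange_psk
    # is the suite's own helper and its internals are not shown in this log.
    prep_key() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                              # e.g. /tmp/tmp.tV6oulXXtT
        format_interchange_psk "$key" "$digest" > "$path"
        chmod 0600 "$path"                          # looser modes are rejected later
        echo "$path"
    }
    key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
    key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)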
00:33:12.839 [2024-07-24 19:09:57.709140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735012 ] 00:33:12.839 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.839 [2024-07-24 19:09:57.790670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.098 [2024-07-24 19:09:57.881591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.665 19:09:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.665 19:09:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:13.665 19:09:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:13.665 19:09:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.665 19:09:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:13.665 [2024-07-24 19:09:58.660733] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.924 null0 00:33:13.924 [2024-07-24 19:09:58.692778] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:13.924 [2024-07-24 19:09:58.693170] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:13.924 [2024-07-24 19:09:58.700782] tcp.c:3771:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.924 19:09:58 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:13.924 [2024-07-24 19:09:58.712822] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:13.924 request: 00:33:13.924 { 00:33:13.924 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.924 "secure_channel": false, 00:33:13.924 "listen_address": { 00:33:13.924 "trtype": "tcp", 00:33:13.924 "traddr": "127.0.0.1", 00:33:13.924 "trsvcid": "4420" 00:33:13.924 }, 00:33:13.924 "method": "nvmf_subsystem_add_listener", 00:33:13.924 "req_id": 1 00:33:13.924 } 00:33:13.924 Got JSON-RPC error response 00:33:13.924 response: 00:33:13.924 { 00:33:13.924 "code": -32602, 00:33:13.924 "message": "Invalid parameters" 00:33:13.924 } 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 
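The rejection just traced is deliberate: file.sh re-adds the listener created a few lines earlier, wrapped in the NOT helper, so the "Listener already exists" error and the -32602 JSON-RPC response are the passing outcome, with es ending up at 1 as expected. A simplified sketch of what NOT does; the autotest_common.sh version, partially visible in the xtrace, also validates the argument and clamps exit codes above 128:

    NOT() {
        # Invert the wrapped command: succeed only if it fails.
        if "$@"; then
            return 1    # unexpected success, the test should fail
        fi
        return 0        # the expected error came back
    }

    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0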
00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.924 19:09:58 keyring_file -- keyring/file.sh@46 -- # bperfpid=2735113 00:33:13.924 19:09:58 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2735113 /var/tmp/bperf.sock 00:33:13.924 19:09:58 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2735113 ']' 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:13.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:13.924 19:09:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:13.924 [2024-07-24 19:09:58.765318] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 00:33:13.924 [2024-07-24 19:09:58.765361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735113 ] 00:33:13.924 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.924 [2024-07-24 19:09:58.833900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.183 [2024-07-24 19:09:58.939945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.183 19:09:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:14.183 19:09:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:14.183 19:09:59 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:14.183 19:09:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:14.751 19:09:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MP37YD39Ln 00:33:14.751 19:09:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MP37YD39Ln 00:33:15.010 19:09:59 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:15.010 19:09:59 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:15.010 19:09:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.010 19:09:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.010 19:09:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.268 19:10:00 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tV6oulXXtT == \/\t\m\p\/\t\m\p\.\t\V\6\o\u\l\X\X\t\T ]] 00:33:15.268 19:10:00 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:15.268 19:10:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:15.268 19:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.268 19:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:15.268 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.526 19:10:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MP37YD39Ln == \/\t\m\p\/\t\m\p\.\M\P\3\7\Y\D\3\9\L\n ]] 00:33:15.526 19:10:00 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:15.526 19:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:15.526 19:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.526 19:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.526 19:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.526 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.823 19:10:00 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:15.823 19:10:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:15.823 19:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:15.823 19:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.823 19:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.823 19:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:15.823 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.823 19:10:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:15.823 19:10:00 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.823 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:16.390 [2024-07-24 19:10:01.263898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:16.390 nvme0n1 00:33:16.390 19:10:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:16.390 19:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:16.390 19:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.390 19:10:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.390 19:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.390 19:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.648 19:10:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:16.648 19:10:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:16.648 19:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:16.648 19:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.648 19:10:01 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.648 19:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.648 19:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:16.907 19:10:01 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:16.907 19:10:01 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:17.164 Running I/O for 1 seconds... 00:33:18.101 00:33:18.101 Latency(us) 00:33:18.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.101 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:18.101 nvme0n1 : 1.01 6960.61 27.19 0.00 0.00 18316.50 7417.48 34317.03 00:33:18.101 =================================================================================================================== 00:33:18.101 Total : 6960.61 27.19 0.00 0.00 18316.50 7417.48 34317.03 00:33:18.101 0 00:33:18.101 19:10:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:18.101 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:18.359 19:10:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:18.359 19:10:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:18.359 19:10:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.359 19:10:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.359 19:10:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:18.359 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.617 19:10:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:18.617 19:10:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:18.617 19:10:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:18.617 19:10:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.617 19:10:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.617 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.617 19:10:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:18.876 19:10:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:18.876 19:10:03 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:18.876 19:10:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:18.876 19:10:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:18.876 19:10:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:18.876 19:10:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:18.876 19:10:03 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:18.876 19:10:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:18.876 19:10:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:18.876 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:19.443 [2024-07-24 19:10:04.219931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:19.443 [2024-07-24 19:10:04.220620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8cd0 (107): Transport endpoint is not connected 00:33:19.443 [2024-07-24 19:10:04.221614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8cd0 (9): Bad file descriptor 00:33:19.443 [2024-07-24 19:10:04.222612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:19.443 [2024-07-24 19:10:04.222628] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:19.443 [2024-07-24 19:10:04.222646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:19.443 request: 00:33:19.443 { 00:33:19.443 "name": "nvme0", 00:33:19.443 "trtype": "tcp", 00:33:19.443 "traddr": "127.0.0.1", 00:33:19.443 "adrfam": "ipv4", 00:33:19.443 "trsvcid": "4420", 00:33:19.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:19.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:19.444 "prchk_reftag": false, 00:33:19.444 "prchk_guard": false, 00:33:19.444 "hdgst": false, 00:33:19.444 "ddgst": false, 00:33:19.444 "psk": "key1", 00:33:19.444 "method": "bdev_nvme_attach_controller", 00:33:19.444 "req_id": 1 00:33:19.444 } 00:33:19.444 Got JSON-RPC error response 00:33:19.444 response: 00:33:19.444 { 00:33:19.444 "code": -5, 00:33:19.444 "message": "Input/output error" 00:33:19.444 } 00:33:19.444 19:10:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:19.444 19:10:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:19.444 19:10:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:19.444 19:10:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:19.444 19:10:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:19.444 19:10:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:19.444 19:10:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.444 19:10:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.444 19:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.444 19:10:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.702 19:10:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:19.702 19:10:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:19.702 19:10:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:19.702 19:10:04 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.702 19:10:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.702 19:10:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.702 19:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.961 19:10:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:19.961 19:10:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:19.961 19:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:20.528 19:10:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:20.528 19:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:20.786 19:10:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:20.786 19:10:05 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:20.786 19:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.045 19:10:06 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:21.045 19:10:06 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tV6oulXXtT 00:33:21.045 19:10:06 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:21.045 19:10:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:21.045 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:21.304 [2024-07-24 19:10:06.259956] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tV6oulXXtT': 0100660 00:33:21.304 [2024-07-24 19:10:06.259997] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:21.304 request: 00:33:21.304 { 00:33:21.304 "name": "key0", 00:33:21.304 "path": "/tmp/tmp.tV6oulXXtT", 00:33:21.304 "method": "keyring_file_add_key", 00:33:21.304 "req_id": 1 00:33:21.304 } 00:33:21.304 Got JSON-RPC error response 00:33:21.304 response: 00:33:21.304 { 00:33:21.304 "code": -1, 00:33:21.304 "message": "Operation not permitted" 00:33:21.304 } 00:33:21.304 19:10:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:21.304 19:10:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:21.304 19:10:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:21.304 19:10:06 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:21.304 19:10:06 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tV6oulXXtT 00:33:21.304 19:10:06 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:21.304 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tV6oulXXtT 00:33:21.563 19:10:06 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tV6oulXXtT 00:33:21.563 19:10:06 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:21.563 19:10:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:21.563 19:10:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.563 19:10:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.563 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.563 19:10:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:21.822 19:10:06 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:21.822 19:10:06 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:21.822 19:10:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:21.822 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.081 [2024-07-24 19:10:07.042120] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tV6oulXXtT': No such file or directory 00:33:22.081 [2024-07-24 19:10:07.042152] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:22.081 [2024-07-24 19:10:07.042189] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:22.081 [2024-07-24 19:10:07.042200] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:22.081 [2024-07-24 19:10:07.042210] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:22.081 request: 00:33:22.081 { 00:33:22.081 "name": "nvme0", 00:33:22.081 "trtype": "tcp", 00:33:22.081 "traddr": "127.0.0.1", 00:33:22.081 "adrfam": "ipv4", 00:33:22.081 
"trsvcid": "4420", 00:33:22.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:22.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:22.081 "prchk_reftag": false, 00:33:22.081 "prchk_guard": false, 00:33:22.081 "hdgst": false, 00:33:22.081 "ddgst": false, 00:33:22.081 "psk": "key0", 00:33:22.081 "method": "bdev_nvme_attach_controller", 00:33:22.081 "req_id": 1 00:33:22.081 } 00:33:22.081 Got JSON-RPC error response 00:33:22.081 response: 00:33:22.081 { 00:33:22.081 "code": -19, 00:33:22.081 "message": "No such device" 00:33:22.081 } 00:33:22.081 19:10:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:22.081 19:10:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:22.081 19:10:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:22.081 19:10:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:22.081 19:10:07 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:22.081 19:10:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:22.650 19:10:07 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:22.650 19:10:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:22.650 19:10:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:22.650 19:10:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:22.650 19:10:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:22.650 19:10:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:22.650 19:10:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4szHyo0C2G 00:33:22.651 19:10:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:22.651 19:10:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:22.651 19:10:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:22.651 19:10:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:22.651 19:10:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:22.651 19:10:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:22.651 19:10:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:22.651 19:10:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4szHyo0C2G 00:33:22.651 19:10:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4szHyo0C2G 00:33:22.651 19:10:07 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4szHyo0C2G 00:33:22.651 19:10:07 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4szHyo0C2G 00:33:22.651 19:10:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4szHyo0C2G 00:33:22.910 19:10:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.910 19:10:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.168 nvme0n1 00:33:23.428 
19:10:08 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:23.428 19:10:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:23.428 19:10:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.428 19:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.428 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.428 19:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.428 19:10:08 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:23.428 19:10:08 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:23.428 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:23.687 19:10:08 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:23.687 19:10:08 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:23.687 19:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.687 19:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.687 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.946 19:10:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:23.946 19:10:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:23.946 19:10:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:23.946 19:10:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.946 19:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.946 19:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.946 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.205 19:10:09 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:24.205 19:10:09 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:24.205 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:24.773 19:10:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:24.773 19:10:09 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:24.773 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:25.032 19:10:09 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:25.032 19:10:09 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4szHyo0C2G 00:33:25.032 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4szHyo0C2G 00:33:25.600 19:10:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MP37YD39Ln 00:33:25.600 19:10:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MP37YD39Ln 00:33:25.860 19:10:10 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:25.860 19:10:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:26.120 nvme0n1 00:33:26.120 19:10:10 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:26.120 19:10:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:26.379 19:10:11 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:26.379 "subsystems": [ 00:33:26.379 { 00:33:26.379 "subsystem": "keyring", 00:33:26.379 "config": [ 00:33:26.379 { 00:33:26.379 "method": "keyring_file_add_key", 00:33:26.379 "params": { 00:33:26.379 "name": "key0", 00:33:26.379 "path": "/tmp/tmp.4szHyo0C2G" 00:33:26.379 } 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "method": "keyring_file_add_key", 00:33:26.379 "params": { 00:33:26.379 "name": "key1", 00:33:26.379 "path": "/tmp/tmp.MP37YD39Ln" 00:33:26.379 } 00:33:26.379 } 00:33:26.379 ] 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "subsystem": "iobuf", 00:33:26.379 "config": [ 00:33:26.379 { 00:33:26.379 "method": "iobuf_set_options", 00:33:26.379 "params": { 00:33:26.379 "small_pool_count": 8192, 00:33:26.379 "large_pool_count": 1024, 00:33:26.379 "small_bufsize": 8192, 00:33:26.379 "large_bufsize": 135168 00:33:26.379 } 00:33:26.379 } 00:33:26.379 ] 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "subsystem": "sock", 00:33:26.379 "config": [ 00:33:26.379 { 00:33:26.379 "method": "sock_set_default_impl", 00:33:26.379 "params": { 00:33:26.379 "impl_name": "posix" 00:33:26.379 } 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "method": "sock_impl_set_options", 00:33:26.379 "params": { 00:33:26.379 "impl_name": "ssl", 00:33:26.379 "recv_buf_size": 4096, 00:33:26.379 "send_buf_size": 4096, 00:33:26.379 "enable_recv_pipe": true, 00:33:26.379 "enable_quickack": false, 00:33:26.379 "enable_placement_id": 0, 00:33:26.379 "enable_zerocopy_send_server": true, 00:33:26.379 "enable_zerocopy_send_client": false, 00:33:26.379 "zerocopy_threshold": 0, 00:33:26.379 "tls_version": 0, 00:33:26.379 "enable_ktls": false 00:33:26.379 } 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "method": "sock_impl_set_options", 00:33:26.379 "params": { 00:33:26.379 "impl_name": "posix", 00:33:26.379 "recv_buf_size": 2097152, 00:33:26.379 "send_buf_size": 2097152, 00:33:26.379 "enable_recv_pipe": true, 00:33:26.379 "enable_quickack": false, 00:33:26.379 "enable_placement_id": 0, 00:33:26.379 "enable_zerocopy_send_server": true, 00:33:26.379 "enable_zerocopy_send_client": false, 00:33:26.379 "zerocopy_threshold": 0, 00:33:26.379 "tls_version": 0, 00:33:26.379 "enable_ktls": false 00:33:26.379 } 00:33:26.379 } 00:33:26.379 ] 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "subsystem": "vmd", 00:33:26.379 "config": [] 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "subsystem": "accel", 00:33:26.379 "config": [ 00:33:26.379 { 00:33:26.379 "method": "accel_set_options", 00:33:26.379 "params": { 00:33:26.379 "small_cache_size": 128, 00:33:26.379 "large_cache_size": 16, 00:33:26.379 "task_count": 2048, 00:33:26.379 "sequence_count": 2048, 00:33:26.379 "buf_count": 2048 00:33:26.379 } 00:33:26.379 } 00:33:26.379 ] 00:33:26.379 
}, 00:33:26.379 { 00:33:26.379 "subsystem": "bdev", 00:33:26.379 "config": [ 00:33:26.379 { 00:33:26.379 "method": "bdev_set_options", 00:33:26.379 "params": { 00:33:26.379 "bdev_io_pool_size": 65535, 00:33:26.379 "bdev_io_cache_size": 256, 00:33:26.379 "bdev_auto_examine": true, 00:33:26.379 "iobuf_small_cache_size": 128, 00:33:26.379 "iobuf_large_cache_size": 16 00:33:26.379 } 00:33:26.379 }, 00:33:26.379 { 00:33:26.379 "method": "bdev_raid_set_options", 00:33:26.379 "params": { 00:33:26.379 "process_window_size_kb": 1024, 00:33:26.380 "process_max_bandwidth_mb_sec": 0 00:33:26.380 } 00:33:26.380 }, 00:33:26.380 { 00:33:26.380 "method": "bdev_iscsi_set_options", 00:33:26.380 "params": { 00:33:26.380 "timeout_sec": 30 00:33:26.380 } 00:33:26.380 }, 00:33:26.380 { 00:33:26.380 "method": "bdev_nvme_set_options", 00:33:26.380 "params": { 00:33:26.380 "action_on_timeout": "none", 00:33:26.380 "timeout_us": 0, 00:33:26.380 "timeout_admin_us": 0, 00:33:26.380 "keep_alive_timeout_ms": 10000, 00:33:26.380 "arbitration_burst": 0, 00:33:26.380 "low_priority_weight": 0, 00:33:26.380 "medium_priority_weight": 0, 00:33:26.380 "high_priority_weight": 0, 00:33:26.380 "nvme_adminq_poll_period_us": 10000, 00:33:26.380 "nvme_ioq_poll_period_us": 0, 00:33:26.380 "io_queue_requests": 512, 00:33:26.380 "delay_cmd_submit": true, 00:33:26.380 "transport_retry_count": 4, 00:33:26.380 "bdev_retry_count": 3, 00:33:26.380 "transport_ack_timeout": 0, 00:33:26.380 "ctrlr_loss_timeout_sec": 0, 00:33:26.380 "reconnect_delay_sec": 0, 00:33:26.380 "fast_io_fail_timeout_sec": 0, 00:33:26.380 "disable_auto_failback": false, 00:33:26.380 "generate_uuids": false, 00:33:26.380 "transport_tos": 0, 00:33:26.380 "nvme_error_stat": false, 00:33:26.380 "rdma_srq_size": 0, 00:33:26.380 "io_path_stat": false, 00:33:26.380 "allow_accel_sequence": false, 00:33:26.380 "rdma_max_cq_size": 0, 00:33:26.380 "rdma_cm_event_timeout_ms": 0, 00:33:26.380 "dhchap_digests": [ 00:33:26.380 "sha256", 00:33:26.380 "sha384", 00:33:26.380 "sha512" 00:33:26.380 ], 00:33:26.380 "dhchap_dhgroups": [ 00:33:26.380 "null", 00:33:26.380 "ffdhe2048", 00:33:26.380 "ffdhe3072", 00:33:26.380 "ffdhe4096", 00:33:26.380 "ffdhe6144", 00:33:26.380 "ffdhe8192" 00:33:26.380 ] 00:33:26.380 } 00:33:26.380 }, 00:33:26.380 { 00:33:26.380 "method": "bdev_nvme_attach_controller", 00:33:26.380 "params": { 00:33:26.380 "name": "nvme0", 00:33:26.380 "trtype": "TCP", 00:33:26.380 "adrfam": "IPv4", 00:33:26.380 "traddr": "127.0.0.1", 00:33:26.380 "trsvcid": "4420", 00:33:26.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.380 "prchk_reftag": false, 00:33:26.380 "prchk_guard": false, 00:33:26.380 "ctrlr_loss_timeout_sec": 0, 00:33:26.380 "reconnect_delay_sec": 0, 00:33:26.380 "fast_io_fail_timeout_sec": 0, 00:33:26.380 "psk": "key0", 00:33:26.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.380 "hdgst": false, 00:33:26.380 "ddgst": false 00:33:26.380 } 00:33:26.380 }, 00:33:26.380 { 00:33:26.380 "method": "bdev_nvme_set_hotplug", 00:33:26.380 "params": { 00:33:26.380 "period_us": 100000, 00:33:26.380 "enable": false 00:33:26.380 } 00:33:26.380 }, 00:33:26.380 { 00:33:26.380 "method": "bdev_wait_for_examine" 00:33:26.380 } 00:33:26.380 ] 00:33:26.380 }, 00:33:26.380 { 00:33:26.380 "subsystem": "nbd", 00:33:26.380 "config": [] 00:33:26.380 } 00:33:26.380 ] 00:33:26.380 }' 00:33:26.380 19:10:11 keyring_file -- keyring/file.sh@114 -- # killprocess 2735113 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2735113 ']' 00:33:26.380 19:10:11 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 2735113 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2735113 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2735113' 00:33:26.380 killing process with pid 2735113 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@967 -- # kill 2735113 00:33:26.380 Received shutdown signal, test time was about 1.000000 seconds 00:33:26.380 00:33:26.380 Latency(us) 00:33:26.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.380 =================================================================================================================== 00:33:26.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.380 19:10:11 keyring_file -- common/autotest_common.sh@972 -- # wait 2735113 00:33:26.640 19:10:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=2737534 00:33:26.640 19:10:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2737534 /var/tmp/bperf.sock 00:33:26.640 19:10:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2737534 ']' 00:33:26.640 19:10:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.640 19:10:11 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:26.640 19:10:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:26.640 19:10:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
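The relaunch that follows feeds the saved JSON config to bdevperf over a process-substitution fd (the /dev/fd/63 in the command line above). A sketch with shortened paths; the rpc_get_methods polling loop is a stand-in, assumed here in place of the harness's waitforlisten helper:

    # Relaunch bdevperf with the config captured by save_config at file.sh@112
    # ($config), then wait for its RPC socket before driving I/O.
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        sleep 0.1                                     # assumed polling interval
    done
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests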
00:33:26.640 19:10:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:26.640 "subsystems": [ 00:33:26.640 { 00:33:26.640 "subsystem": "keyring", 00:33:26.640 "config": [ 00:33:26.640 { 00:33:26.640 "method": "keyring_file_add_key", 00:33:26.640 "params": { 00:33:26.640 "name": "key0", 00:33:26.640 "path": "/tmp/tmp.4szHyo0C2G" 00:33:26.640 } 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "method": "keyring_file_add_key", 00:33:26.640 "params": { 00:33:26.640 "name": "key1", 00:33:26.640 "path": "/tmp/tmp.MP37YD39Ln" 00:33:26.640 } 00:33:26.640 } 00:33:26.640 ] 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "subsystem": "iobuf", 00:33:26.640 "config": [ 00:33:26.640 { 00:33:26.640 "method": "iobuf_set_options", 00:33:26.640 "params": { 00:33:26.640 "small_pool_count": 8192, 00:33:26.640 "large_pool_count": 1024, 00:33:26.640 "small_bufsize": 8192, 00:33:26.640 "large_bufsize": 135168 00:33:26.640 } 00:33:26.640 } 00:33:26.640 ] 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "subsystem": "sock", 00:33:26.640 "config": [ 00:33:26.640 { 00:33:26.640 "method": "sock_set_default_impl", 00:33:26.640 "params": { 00:33:26.640 "impl_name": "posix" 00:33:26.640 } 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "method": "sock_impl_set_options", 00:33:26.640 "params": { 00:33:26.640 "impl_name": "ssl", 00:33:26.640 "recv_buf_size": 4096, 00:33:26.640 "send_buf_size": 4096, 00:33:26.640 "enable_recv_pipe": true, 00:33:26.640 "enable_quickack": false, 00:33:26.640 "enable_placement_id": 0, 00:33:26.640 "enable_zerocopy_send_server": true, 00:33:26.640 "enable_zerocopy_send_client": false, 00:33:26.640 "zerocopy_threshold": 0, 00:33:26.640 "tls_version": 0, 00:33:26.640 "enable_ktls": false 00:33:26.640 } 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "method": "sock_impl_set_options", 00:33:26.640 "params": { 00:33:26.640 "impl_name": "posix", 00:33:26.640 "recv_buf_size": 2097152, 00:33:26.640 "send_buf_size": 2097152, 00:33:26.640 "enable_recv_pipe": true, 00:33:26.640 "enable_quickack": false, 00:33:26.640 "enable_placement_id": 0, 00:33:26.640 "enable_zerocopy_send_server": true, 00:33:26.640 "enable_zerocopy_send_client": false, 00:33:26.640 "zerocopy_threshold": 0, 00:33:26.640 "tls_version": 0, 00:33:26.640 "enable_ktls": false 00:33:26.640 } 00:33:26.640 } 00:33:26.640 ] 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "subsystem": "vmd", 00:33:26.640 "config": [] 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "subsystem": "accel", 00:33:26.640 "config": [ 00:33:26.640 { 00:33:26.640 "method": "accel_set_options", 00:33:26.640 "params": { 00:33:26.640 "small_cache_size": 128, 00:33:26.640 "large_cache_size": 16, 00:33:26.640 "task_count": 2048, 00:33:26.640 "sequence_count": 2048, 00:33:26.640 "buf_count": 2048 00:33:26.640 } 00:33:26.640 } 00:33:26.640 ] 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "subsystem": "bdev", 00:33:26.640 "config": [ 00:33:26.640 { 00:33:26.640 "method": "bdev_set_options", 00:33:26.640 "params": { 00:33:26.640 "bdev_io_pool_size": 65535, 00:33:26.640 "bdev_io_cache_size": 256, 00:33:26.640 "bdev_auto_examine": true, 00:33:26.640 "iobuf_small_cache_size": 128, 00:33:26.640 "iobuf_large_cache_size": 16 00:33:26.640 } 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "method": "bdev_raid_set_options", 00:33:26.640 "params": { 00:33:26.640 "process_window_size_kb": 1024, 00:33:26.640 "process_max_bandwidth_mb_sec": 0 00:33:26.640 } 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "method": "bdev_iscsi_set_options", 00:33:26.640 "params": { 00:33:26.640 "timeout_sec": 30 00:33:26.640 } 00:33:26.640 
}, 00:33:26.640 { 00:33:26.640 "method": "bdev_nvme_set_options", 00:33:26.640 "params": { 00:33:26.640 "action_on_timeout": "none", 00:33:26.640 "timeout_us": 0, 00:33:26.640 "timeout_admin_us": 0, 00:33:26.640 "keep_alive_timeout_ms": 10000, 00:33:26.640 "arbitration_burst": 0, 00:33:26.640 "low_priority_weight": 0, 00:33:26.640 "medium_priority_weight": 0, 00:33:26.640 "high_priority_weight": 0, 00:33:26.640 "nvme_adminq_poll_period_us": 10000, 00:33:26.640 "nvme_ioq_poll_period_us": 0, 00:33:26.640 "io_queue_requests": 512, 00:33:26.640 "delay_cmd_submit": true, 00:33:26.640 "transport_retry_count": 4, 00:33:26.640 "bdev_retry_count": 3, 00:33:26.640 "transport_ack_timeout": 0, 00:33:26.640 "ctrlr_loss_timeout_sec": 0, 00:33:26.640 "reconnect_delay_sec": 0, 00:33:26.640 "fast_io_fail_timeout_sec": 0, 00:33:26.640 "disable_auto_failback": false, 00:33:26.640 "generate_uuids": false, 00:33:26.640 "transport_tos": 0, 00:33:26.640 "nvme_error_stat": false, 00:33:26.640 "rdma_srq_size": 0, 00:33:26.640 "io_path_stat": false, 00:33:26.640 "allow_accel_sequence": false, 00:33:26.640 "rdma_max_cq_size": 0, 00:33:26.640 "rdma_cm_event_timeout_ms": 0, 00:33:26.640 "dhchap_digests": [ 00:33:26.640 "sha256", 00:33:26.640 "sha384", 00:33:26.640 "sha512" 00:33:26.640 ], 00:33:26.640 "dhchap_dhgroups": [ 00:33:26.640 "null", 00:33:26.640 "ffdhe2048", 00:33:26.640 "ffdhe3072", 00:33:26.640 "ffdhe4096", 00:33:26.640 "ffdhe6144", 00:33:26.640 "ffdhe8192" 00:33:26.640 ] 00:33:26.640 } 00:33:26.640 }, 00:33:26.640 { 00:33:26.640 "method": "bdev_nvme_attach_controller", 00:33:26.640 "params": { 00:33:26.641 "name": "nvme0", 00:33:26.641 "trtype": "TCP", 00:33:26.641 "adrfam": "IPv4", 00:33:26.641 "traddr": "127.0.0.1", 00:33:26.641 "trsvcid": "4420", 00:33:26.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.641 "prchk_reftag": false, 00:33:26.641 "prchk_guard": false, 00:33:26.641 "ctrlr_loss_timeout_sec": 0, 00:33:26.641 "reconnect_delay_sec": 0, 00:33:26.641 "fast_io_fail_timeout_sec": 0, 00:33:26.641 "psk": "key0", 00:33:26.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.641 "hdgst": false, 00:33:26.641 "ddgst": false 00:33:26.641 } 00:33:26.641 }, 00:33:26.641 { 00:33:26.641 "method": "bdev_nvme_set_hotplug", 00:33:26.641 "params": { 00:33:26.641 "period_us": 100000, 00:33:26.641 "enable": false 00:33:26.641 } 00:33:26.641 }, 00:33:26.641 { 00:33:26.641 "method": "bdev_wait_for_examine" 00:33:26.641 } 00:33:26.641 ] 00:33:26.641 }, 00:33:26.641 { 00:33:26.641 "subsystem": "nbd", 00:33:26.641 "config": [] 00:33:26.641 } 00:33:26.641 ] 00:33:26.641 }' 00:33:26.641 19:10:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:26.641 19:10:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.641 [2024-07-24 19:10:11.626278] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
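The refcount checks that follow (file.sh@120-122) lean on two small helpers from keyring/common.sh, reconstructed here from the trace (paths shortened):

    # get_key lists keys over the bperf RPC socket and picks one by name;
    # get_refcnt extracts its reference count.
    get_key()    { ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
                       jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }
    (( $(get_refcnt key0) == 2 ))   # held by the keyring plus the attached nvme0 controller
    (( $(get_refcnt key1) == 1 ))   # loaded but unused by any controller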
00:33:26.641 [2024-07-24 19:10:11.626393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737534 ] 00:33:26.900 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.900 [2024-07-24 19:10:11.742290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.900 [2024-07-24 19:10:11.846076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.159 [2024-07-24 19:10:12.018715] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:27.727 19:10:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:27.727 19:10:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:27.727 19:10:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:27.727 19:10:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:27.727 19:10:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.986 19:10:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:27.986 19:10:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:27.986 19:10:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:27.986 19:10:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:27.986 19:10:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.986 19:10:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:27.986 19:10:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.245 19:10:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:28.245 19:10:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:28.245 19:10:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:28.245 19:10:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.245 19:10:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.245 19:10:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.245 19:10:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.504 19:10:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:28.504 19:10:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:28.504 19:10:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:28.504 19:10:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:28.764 19:10:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:28.764 19:10:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:28.764 19:10:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4szHyo0C2G /tmp/tmp.MP37YD39Ln 00:33:28.764 19:10:13 keyring_file -- keyring/file.sh@20 -- # killprocess 2737534 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2737534 ']' 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2737534 00:33:28.764 19:10:13 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2737534 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2737534' 00:33:28.764 killing process with pid 2737534 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@967 -- # kill 2737534 00:33:28.764 Received shutdown signal, test time was about 1.000000 seconds 00:33:28.764 00:33:28.764 Latency(us) 00:33:28.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.764 =================================================================================================================== 00:33:28.764 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:28.764 19:10:13 keyring_file -- common/autotest_common.sh@972 -- # wait 2737534 00:33:29.024 19:10:13 keyring_file -- keyring/file.sh@21 -- # killprocess 2735012 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2735012 ']' 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2735012 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2735012 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2735012' 00:33:29.024 killing process with pid 2735012 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@967 -- # kill 2735012 00:33:29.024 [2024-07-24 19:10:13.905940] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:29.024 19:10:13 keyring_file -- common/autotest_common.sh@972 -- # wait 2735012 00:33:29.283 00:33:29.283 real 0m16.841s 00:33:29.283 user 0m42.536s 00:33:29.283 sys 0m3.300s 00:33:29.283 19:10:14 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:29.283 19:10:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:29.283 ************************************ 00:33:29.283 END TEST keyring_file 00:33:29.283 ************************************ 00:33:29.283 19:10:14 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:29.283 19:10:14 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:29.283 19:10:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:29.283 19:10:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:29.283 19:10:14 -- common/autotest_common.sh@10 -- # set +x 00:33:29.543 ************************************ 00:33:29.543 START TEST keyring_linux 00:33:29.543 ************************************ 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:29.543 * Looking for test 
storage... 00:33:29.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.543 19:10:14 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.543 19:10:14 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.543 19:10:14 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.543 19:10:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.543 19:10:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.543 19:10:14 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.543 19:10:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:29.543 19:10:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:29.543 19:10:14 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:29.543 /tmp/:spdk-test:key0 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:29.543 19:10:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:29.543 19:10:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:29.543 /tmp/:spdk-test:key1 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2738096 00:33:29.543 19:10:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2738096 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2738096 ']' 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:29.543 19:10:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:29.804 [2024-07-24 19:10:14.581360] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
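The interchange string written to those key files can be reproduced as below. This is a sketch inferred from the strings visible in this trace: digest 00, then base64 of the raw key bytes with what appears to be a little-endian CRC32 appended. The CRC detail is an assumption; only the final string is confirmed by the log.

    # Reconstructs the NVMeTLSkey-1 string for key0, mirroring the log's own
    # "python -" step in nvmf/common.sh@705.
    python3 - <<'PY'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff"       # the test's key0 material
    crc = zlib.crc32(key).to_bytes(4, "little")     # assumed integrity tail
    print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
    # expected (per the keyctl add below):
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    PY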
00:33:29.804 [2024-07-24 19:10:14.581426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738096 ] 00:33:29.804 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.804 [2024-07-24 19:10:14.661707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.804 [2024-07-24 19:10:14.754166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.097 19:10:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:30.097 19:10:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:30.097 19:10:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:30.097 19:10:14 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.097 19:10:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:30.097 [2024-07-24 19:10:14.979143] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.097 null0 00:33:30.097 [2024-07-24 19:10:15.011191] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:30.097 [2024-07-24 19:10:15.011613] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:30.097 19:10:15 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.097 19:10:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:30.097 635905361 00:33:30.097 19:10:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:30.097 897193828 00:33:30.097 19:10:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2738230 00:33:30.097 19:10:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2738230 /var/tmp/bperf.sock 00:33:30.097 19:10:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:30.097 19:10:15 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2738230 ']' 00:33:30.097 19:10:15 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:30.098 19:10:15 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.098 19:10:15 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:30.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:30.098 19:10:15 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.098 19:10:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:30.098 [2024-07-24 19:10:15.086547] Starting SPDK v24.09-pre git sha1 0bb5c21e2 / DPDK 24.03.0 initialization... 
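The kernel-keyring flow exercised next condenses to the sketch below (commands as traced, with the PSK string elided and paths shortened): the key is stored in the session keyring (@s) under a name the keyring_linux module can resolve, the module is enabled on the bperf instance started with --wait-for-rpc, initialization is finished, and the controller attaches by keyring name instead of a file path.

    # keyctl returns the key's serial number (635905361 for key0 below).
    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)
    keyctl print "$sn"                               # shows the stored PSK
    ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0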
00:33:30.098 [2024-07-24 19:10:15.086608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738230 ] 00:33:30.357 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.357 [2024-07-24 19:10:15.166433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.357 [2024-07-24 19:10:15.267089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.293 19:10:16 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.293 19:10:16 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:31.293 19:10:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:31.293 19:10:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:31.861 19:10:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:31.861 19:10:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:32.120 19:10:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:32.120 19:10:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:32.379 [2024-07-24 19:10:17.330302] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:32.638 nvme0n1 00:33:32.638 19:10:17 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:32.638 19:10:17 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:32.638 19:10:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:32.638 19:10:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:32.638 19:10:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:32.638 19:10:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.898 19:10:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:32.898 19:10:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:32.898 19:10:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:32.898 19:10:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:32.898 19:10:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.898 19:10:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:32.898 19:10:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@25 -- # sn=635905361 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 635905361 == \6\3\5\9\0\5\3\6\1 ]] 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 635905361 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:33.157 19:10:17 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.157 Running I/O for 1 seconds... 00:33:34.534 00:33:34.534 Latency(us) 00:33:34.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.534 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:34.534 nvme0n1 : 1.01 6886.23 26.90 0.00 0.00 18461.85 5064.15 23116.33 00:33:34.534 =================================================================================================================== 00:33:34.534 Total : 6886.23 26.90 0.00 0.00 18461.85 5064.15 23116.33 00:33:34.534 0 00:33:34.534 19:10:19 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:34.534 19:10:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:34.793 19:10:19 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:34.793 19:10:19 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:34.793 19:10:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:34.794 19:10:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:34.794 19:10:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:34.794 19:10:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.053 19:10:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:35.053 19:10:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:35.053 19:10:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:35.053 19:10:19 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:35.053 19:10:19 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:35.053 19:10:19 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:35.312 [2024-07-24 19:10:20.207227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:35.312 [2024-07-24 19:10:20.207511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a6c10 (107): Transport endpoint is not connected 00:33:35.312 [2024-07-24 19:10:20.208502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a6c10 (9): Bad file descriptor 00:33:35.312 [2024-07-24 19:10:20.209502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:35.312 [2024-07-24 19:10:20.209518] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:35.312 [2024-07-24 19:10:20.209530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:35.312 request: 00:33:35.312 { 00:33:35.312 "name": "nvme0", 00:33:35.312 "trtype": "tcp", 00:33:35.312 "traddr": "127.0.0.1", 00:33:35.312 "adrfam": "ipv4", 00:33:35.312 "trsvcid": "4420", 00:33:35.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.312 "prchk_reftag": false, 00:33:35.312 "prchk_guard": false, 00:33:35.312 "hdgst": false, 00:33:35.312 "ddgst": false, 00:33:35.312 "psk": ":spdk-test:key1", 00:33:35.312 "method": "bdev_nvme_attach_controller", 00:33:35.312 "req_id": 1 00:33:35.312 } 00:33:35.312 Got JSON-RPC error response 00:33:35.312 response: 00:33:35.312 { 00:33:35.312 "code": -5, 00:33:35.312 "message": "Input/output error" 00:33:35.312 } 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@33 -- # sn=635905361 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 635905361 00:33:35.312 1 links removed 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@33 -- # sn=897193828 00:33:35.312 19:10:20 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 897193828 00:33:35.312 1 links removed 00:33:35.312 19:10:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2738230 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2738230 ']' 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2738230 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738230 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738230' 00:33:35.312 killing process with pid 2738230 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@967 -- # kill 2738230 00:33:35.312 Received shutdown signal, test time was about 1.000000 seconds 00:33:35.312 00:33:35.312 Latency(us) 00:33:35.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.312 =================================================================================================================== 00:33:35.312 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.312 19:10:20 keyring_linux -- common/autotest_common.sh@972 -- # wait 2738230 00:33:35.571 19:10:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2738096 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2738096 ']' 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2738096 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738096 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738096' 00:33:35.571 killing process with pid 2738096 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@967 -- # kill 2738096 00:33:35.571 19:10:20 keyring_linux -- common/autotest_common.sh@972 -- # wait 2738096 00:33:36.140 00:33:36.140 real 0m6.597s 00:33:36.140 user 0m13.835s 00:33:36.140 sys 0m1.553s 00:33:36.140 19:10:20 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:36.140 19:10:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:36.140 ************************************ 00:33:36.140 END TEST keyring_linux 00:33:36.140 ************************************ 00:33:36.140 19:10:20 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@343 
-- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:36.140 19:10:20 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:36.140 19:10:20 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:36.140 19:10:20 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:36.140 19:10:20 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:36.140 19:10:20 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:36.140 19:10:20 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:36.140 19:10:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:36.140 19:10:20 -- common/autotest_common.sh@10 -- # set +x 00:33:36.140 19:10:20 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:36.140 19:10:20 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:33:36.140 19:10:20 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:33:36.140 19:10:20 -- common/autotest_common.sh@10 -- # set +x 00:33:41.415 INFO: APP EXITING 00:33:41.415 INFO: killing all VMs 00:33:41.415 INFO: killing vhost app 00:33:41.415 WARN: no vhost pid file found 00:33:41.415 INFO: EXIT DONE 00:33:43.951 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:33:43.951 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:43.951 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:43.951 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:43.951 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:43.951 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:44.210 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:47.501 Cleaning 00:33:47.501 Removing: /var/run/dpdk/spdk0/config 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:47.501 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:47.501 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:47.501 Removing: /var/run/dpdk/spdk1/config 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:47.501 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:47.501 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:47.501 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:47.501 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:47.501 Removing: /var/run/dpdk/spdk2/config 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:47.501 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:47.501 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:47.501 Removing: /var/run/dpdk/spdk3/config 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:47.501 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:47.501 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:47.501 Removing: /var/run/dpdk/spdk4/config 00:33:47.501 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:47.501 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:47.501 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:47.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:47.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:47.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:47.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:47.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:47.502 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:47.502 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:47.502 Removing: /dev/shm/bdev_svc_trace.1 00:33:47.502 Removing: /dev/shm/nvmf_trace.0 00:33:47.502 Removing: /dev/shm/spdk_tgt_trace.pid2297340 00:33:47.502 Removing: /var/run/dpdk/spdk0 00:33:47.502 Removing: /var/run/dpdk/spdk1 00:33:47.502 Removing: /var/run/dpdk/spdk2 00:33:47.502 Removing: /var/run/dpdk/spdk3 00:33:47.502 Removing: /var/run/dpdk/spdk4 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2294915 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2296140 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2297340 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2298036 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2299067 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2299129 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2300224 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2300489 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2300824 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2302592 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2304018 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2304384 
00:33:47.502 Removing: /var/run/dpdk/spdk_pid2305025 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2305363 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2305693 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2306107 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2306618 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2306955 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2308051 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2311437 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2311725 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2312009 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2312026 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2312638 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2312848 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2313407 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2313669 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2313962 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2314188 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2314338 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2314539 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2315154 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2315442 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2315761 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2316065 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2316179 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2316405 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2316683 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2316968 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2317247 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2317532 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2317810 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2318092 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2318375 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2318654 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2318941 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2319224 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2319503 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2319785 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2320066 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2320351 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2320631 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2320916 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2321198 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2321485 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2321765 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2322053 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2322360 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2322705 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2326756 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2331392 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2342477 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2343170 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2347682 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2348178 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2353181 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2359574 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2362783 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2374291 00:33:47.502 Removing: /var/run/dpdk/spdk_pid2383642 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2385707 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2386694 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2405665 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2409881 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2457230 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2462826 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2469345 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2476125 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2476130 
00:33:47.761 Removing: /var/run/dpdk/spdk_pid2476972 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2477960 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2479005 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2479537 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2479540 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2479808 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2480066 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2480072 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2481107 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2481902 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2482945 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2483477 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2483588 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2483969 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2485138 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2486246 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2494927 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2531554 00:33:47.761 Removing: /var/run/dpdk/spdk_pid2536981 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2538796 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2540891 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2541165 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2541443 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2541711 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2542548 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2544638 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2546021 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2546767 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2549148 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2549784 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2550616 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2555039 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2561018 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2561019 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2561020 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2565039 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2573803 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2578790 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2585238 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2586805 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2588494 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2593174 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2597477 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2605052 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2605062 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2610140 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2610403 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2610661 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2611171 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2611189 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2615742 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2616393 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2621230 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2624133 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2630170 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2636322 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2646528 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2653771 00:33:47.762 Removing: /var/run/dpdk/spdk_pid2653823 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2673841 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2674649 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2675434 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2675980 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2677200 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2678044 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2678925 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2679758 
00:33:48.021 Removing: /var/run/dpdk/spdk_pid2684460 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2684791 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2691084 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2691290 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2693745 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2702052 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2702144 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2707607 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2709602 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2711846 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2713067 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2715284 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2716504 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2726430 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2726958 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2727480 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2729937 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2730465 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2730993 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2735012 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2735113 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2737534 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2738096 00:33:48.021 Removing: /var/run/dpdk/spdk_pid2738230 00:33:48.021 Clean 00:33:48.021 19:10:32 -- common/autotest_common.sh@1449 -- # return 0 00:33:48.021 19:10:32 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:48.021 19:10:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:48.021 19:10:32 -- common/autotest_common.sh@10 -- # set +x 00:33:48.021 19:10:33 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:48.021 19:10:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:48.021 19:10:33 -- common/autotest_common.sh@10 -- # set +x 00:33:48.280 19:10:33 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:48.280 19:10:33 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:48.280 19:10:33 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:48.280 19:10:33 -- spdk/autotest.sh@391 -- # hash lcov 00:33:48.280 19:10:33 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:48.280 19:10:33 -- spdk/autotest.sh@393 -- # hostname 00:33:48.280 19:10:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:48.280 geninfo: WARNING: invalid characters removed from testname! 
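The geninfo warning above is non-fatal here (the run continues); what follows is the coverage post-processing. As a sketch of what the next lcov invocations do, with the long --rc option lists elided (cov_base.info was captured before the tests ran, cov_test.info afterwards):

# merge the baseline and test captures into a single tracefile
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# prune code that should not count toward SPDK coverage
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info '/usr/*' -o cov_total.info
# the further -r passes below drop example and app sources the same way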
00:34:20.406 19:11:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:21.782 19:11:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:24.314 19:11:09 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:27.599 19:11:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:30.134 19:11:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:33.424 19:11:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:35.959 19:11:20 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:35.959 19:11:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.959 19:11:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:35.959 19:11:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.959 19:11:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.959 19:11:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.959 19:11:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.959 19:11:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.959 19:11:20 -- paths/export.sh@5 -- $ export PATH 00:34:35.959 19:11:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.960 19:11:20 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:35.960 19:11:20 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:35.960 19:11:20 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721841080.XXXXXX 00:34:35.960 19:11:20 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721841080.XJXpiX 00:34:35.960 19:11:20 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:35.960 19:11:20 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:35.960 19:11:20 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:35.960 19:11:20 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:35.960 19:11:20 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:35.960 19:11:20 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:36.219 19:11:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:36.219 19:11:20 -- common/autotest_common.sh@10 -- $ set +x 00:34:36.219 19:11:20 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:36.219 19:11:20 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:36.219 19:11:20 -- pm/common@17 -- $ local monitor 00:34:36.219 19:11:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:36.219 19:11:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:36.219 19:11:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:36.219 19:11:20 -- pm/common@21 -- $ date +%s 00:34:36.219 19:11:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:36.219 19:11:20 -- pm/common@21 -- $ date +%s 00:34:36.219 
19:11:20 -- pm/common@25 -- $ sleep 1 00:34:36.219 19:11:20 -- pm/common@21 -- $ date +%s 00:34:36.219 19:11:20 -- pm/common@21 -- $ date +%s 00:34:36.219 19:11:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841080 00:34:36.220 19:11:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841080 00:34:36.220 19:11:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841080 00:34:36.220 19:11:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841080 00:34:36.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841080_collect-vmstat.pm.log 00:34:36.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841080_collect-cpu-load.pm.log 00:34:36.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841080_collect-cpu-temp.pm.log 00:34:36.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841080_collect-bmc-pm.bmc.pm.log 00:34:37.158 19:11:21 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:37.158 19:11:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:37.158 19:11:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:37.158 19:11:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:37.158 19:11:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:37.158 19:11:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:37.158 19:11:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:37.158 19:11:21 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:37.158 19:11:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:37.158 19:11:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:37.158 19:11:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:37.158 19:11:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:37.158 19:11:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:37.158 19:11:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.158 19:11:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:37.158 19:11:22 -- pm/common@44 -- $ pid=2749360 00:34:37.158 19:11:22 -- pm/common@50 -- $ kill -TERM 2749360 00:34:37.158 19:11:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.158 19:11:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:37.158 19:11:22 -- pm/common@44 -- $ pid=2749361 00:34:37.158 19:11:22 -- pm/common@50 -- $ kill 
-TERM 2749361 00:34:37.158 19:11:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.158 19:11:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:37.158 19:11:22 -- pm/common@44 -- $ pid=2749363 00:34:37.158 19:11:22 -- pm/common@50 -- $ kill -TERM 2749363 00:34:37.158 19:11:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.158 19:11:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:37.158 19:11:22 -- pm/common@44 -- $ pid=2749391 00:34:37.158 19:11:22 -- pm/common@50 -- $ sudo -E kill -TERM 2749391 00:34:37.158 + [[ -n 2182864 ]] 00:34:37.158 + sudo kill 2182864 00:34:37.169 [Pipeline] } 00:34:37.188 [Pipeline] // stage 00:34:37.194 [Pipeline] } 00:34:37.213 [Pipeline] // timeout 00:34:37.219 [Pipeline] } 00:34:37.238 [Pipeline] // catchError 00:34:37.243 [Pipeline] } 00:34:37.262 [Pipeline] // wrap 00:34:37.269 [Pipeline] } 00:34:37.285 [Pipeline] // catchError 00:34:37.296 [Pipeline] stage 00:34:37.298 [Pipeline] { (Epilogue) 00:34:37.313 [Pipeline] catchError 00:34:37.315 [Pipeline] { 00:34:37.331 [Pipeline] echo 00:34:37.332 Cleanup processes 00:34:37.338 [Pipeline] sh 00:34:37.689 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:37.689 2749477 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:37.689 2749809 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:37.704 [Pipeline] sh 00:34:37.989 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:37.989 ++ grep -v 'sudo pgrep' 00:34:37.989 ++ awk '{print $1}' 00:34:37.989 + sudo kill -9 2749477 00:34:38.001 [Pipeline] sh 00:34:38.284 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:53.180 [Pipeline] sh 00:34:53.465 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:53.465 Artifacts sizes are good 00:34:53.480 [Pipeline] archiveArtifacts 00:34:53.487 Archiving artifacts 00:34:53.695 [Pipeline] sh 00:34:53.980 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:53.994 [Pipeline] cleanWs 00:34:54.004 [WS-CLEANUP] Deleting project workspace... 00:34:54.005 [WS-CLEANUP] Deferred wipeout is used... 00:34:54.011 [WS-CLEANUP] done 00:34:54.013 [Pipeline] } 00:34:54.034 [Pipeline] // catchError 00:34:54.048 [Pipeline] sh 00:34:54.330 + logger -p user.info -t JENKINS-CI 00:34:54.340 [Pipeline] } 00:34:54.357 [Pipeline] // stage 00:34:54.362 [Pipeline] } 00:34:54.381 [Pipeline] // node 00:34:54.387 [Pipeline] End of Pipeline 00:34:54.415 Finished: SUCCESS
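One idiom from the epilogue above deserves a note: leftover workspace processes are located with pgrep -af, the pgrep invocation itself is filtered out, and the surviving pids are force-killed. A condensed equivalent of those xtrace'd steps (the script substitutes the pid inline; the xargs -r form here is an assumption for compactness, not what the pipeline literally runs):

# list everything still running out of the workspace, drop the pgrep
# line itself, keep the pid column, and SIGKILL whatever remains
sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
  | grep -v 'sudo pgrep' | awk '{print $1}' \
  | xargs -r sudo kill -9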